Thursday, September 27, 2012

Three Ways to Dominate the Marketing Automation Industry

I wrote last August that it’s still possible for new B2B marketing automation vendors to challenge the industry leaders. This was based on the observation that several of the smaller vendors have quickly reached the 1,000 client benchmark. But it didn’t answer the more interesting question of what it would take for a new vendor to really bypass the current leaders.

That particular question came up repeatedly during Dreamforce last week. The answer may not matter to marketers who don’t themselves work for a marketing automation vendor. But I think it’s worth pondering anyway, if only as an interesting case study in business strategy.

My own answer is: at this stage of the industry, the basic features of marketing automation are pretty much set, so radically different features are not likely to emerge as a major competitive advantage. (That’s not to say new features won’t be important, particularly extensions into areas like social and mobile. But new features won’t be enough because they can be copied too quickly if they're really popular.) Rather, a new industry leader would have to remove the critical bottleneck to industry growth: the shortage of marketers with the skills needed to fully use marketing automation capabilities.

I don’t think I need to spend too much time defending that particular premise: if you want a data point, how about the widely quoted SiriusDecisions figure that 85% of marketers don’t believe they are using their marketing automation platform to its fullest? Let's move on to the more important question: how could a vendor change the situation?

It seems to me there are three ways to approach this:

- make the systems radically easier to use. This is by far my preferred solution. It may seem an unobtainable goal: after all, ease of use has been a top priority of marketing automation vendors for years, and you’d think that by now all those smart people would have made things about as easy as they can be. But I think the right basis of comparison is Google AdWords, which made entry-level search engine marketing so incredibly simple that pretty much anyone can do it with no training at all.

As with AdWords, a radically simpler marketing automation system would just ask users to make a handful of basic decisions about content and target audience, and would build everything else automatically. Again like AdWords, the system would automatically optimize the programs based on results. This implies a degree of automation well beyond today’s marketing automation products, although increasingly common features like dynamic content and integrated predictive modeling offer a hint at how it could happen.

You could argue that marketers don’t want to delegate so much responsibility to a system, but many seem to have delegated to AdWords quite happily. Of course, AdWords also lets more sophisticated marketers make more decisions for themselves, and I’d expect any marketing automation system to provide that option as well. And, again as with search engine marketing, I’d expect the most sophisticated marketers to adopt more specialized systems than AdWords itself, but those marketers will remain a minority.
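
To make that first option more concrete, here is a minimal sketch (in Python, and purely hypothetical: the field names, defaults, and flow structure are my own invention, not any vendor’s product) of what an AdWords-style setup might look like. The marketer supplies only a target audience, a content theme, and a goal, and the system expands that into a default nurture flow it can then optimize against results.

```python
# Hypothetical sketch: expand a handful of marketer decisions into a default
# nurture program, in the spirit of AdWords-style simplicity. Names and
# defaults are illustrative assumptions, not a real product's API.

from dataclasses import dataclass

@dataclass
class CampaignSpec:
    audience: str                  # e.g. "IT directors, 100-1,000 employees"
    theme: str                     # the content topic to promote
    goal: str = "demo_request"
    emails_per_week: int = 2       # how often the system may contact a prospect

@dataclass
class Step:
    name: str
    asset: str
    wait_days: int

def build_default_program(spec: CampaignSpec) -> list:
    """Expand the spec into a default three-step flow the system can optimize."""
    pacing = max(1, 7 // spec.emails_per_week)
    return [
        Step("awareness", f"intro piece on {spec.theme}", wait_days=0),
        Step("education", f"how-to guide on {spec.theme}", wait_days=pacing),
        Step("conversion", f"offer tied to goal '{spec.goal}'", wait_days=pacing * 2),
    ]

if __name__ == "__main__":
    spec = CampaignSpec(audience="IT directors at mid-size firms", theme="cloud backup")
    for step in build_default_program(spec):
        print(step)
```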


- make it radically easier for marketers to use existing functions. This is not about making the functions themselves simpler: per my earlier comment, I’ll accept that all those smart folks have done about as much as possible in that direction. But I think more can be done to help marketers learn to use those functions more quickly and with less work.

What I have in mind specifically is “just in time” approaches that make it very easy for marketers to learn how to do a new task once they've started it, rather than taking separate training classes or looking up detailed instructions. This means context-sensitive help functions that can guess what you’re trying to do and offer advice when you seem to be having trouble. It also means lots of little instructional snippets instead of monolithic tutorials that have to be consumed all at once. This is standard stuff in the software industry, although some companies do it much better than others.

I think a marketing automation vendor who really focused on this would have a major advantage among new users, who are exactly the key audience.  If you want a specific benchmark for this approach, it’s that people can perform tasks with zero advance training.

- provide services so marketers don’t need to use the systems themselves. Quite a number of vendors have taken the services-based approach. In a way, it’s an admission of defeat: no, we really can’t make the systems simple enough for mere mortals. But I'd be happy to trade pride for success.

The trick to this approach is to keep the service cost low enough that you can actually make money. That comes down to things like prepackaged templates for creative materials and campaign flows, highly automated processes so the service staff can work efficiently, and standardized methodologies so inexperienced (ok, that's a euphemism for low cost) individuals can be easily trained to provide adequate service. Again, these are pretty standard things but I don’t think any vendor has really designed their system and business model around them.

Note that a system designed for efficient use by highly trained service people would look quite different from one designed for easy use and learning by lightly trained end-users. So this approach would really imply a fundamental change in how vendors build their products.

As I said earlier, my preferred option is the first one, making systems radically simpler. But I’m guessing the more practical one is the middle choice of providing more effective help using systems similar to today’s. It’s possible that middle option isn’t viable: maybe vendors can’t provide enough additional help to make a difference. But I don’t think I’ve seen any vendor really focus on that option – and won’t concede I’m wrong until I have.

Saturday, September 22, 2012

Marketing Automation Beer Goggles: What I Think I Learned at Dreamforce


I’m writing this on my way home from Dreamforce, the Salesforce.com user conference that has become the primary industry gathering for marketing automation vendors. With a reported 90,000 attendees (I didn't count them personally), the show is fragmented into many different experiences. My own experience was mostly talking to marketing technology vendors in the exhibit hall, private meetings, and maybe a party or two. I did attend the main keynote and the “marketing cloud” announcement, but neither contained major product news and the basic story – that social networks change everything – was true but far from novel.

So what did I learn? On reflection, there were two themes I hadn’t expected when I arrived.

The first was data. I generally think of marketing systems as relying primarily on data from the company’s own marketing, sales, and operational systems. But the exhibit hall was filled with vendors offering information – mostly from Web crawling or social media – to supplement the company’s internal resources. Of course, this isn’t new, but it seems that external sources are becoming increasingly important. The main reason is that so much valuable public information is now available. A lesser factor may be that there’s less internal information, at least for sales and marketing, because so many prospects engage indirectly and anonymously until deep in the buying process.

But there’s more to data than the data itself. The theme includes easier connectivity to external data, via standard connectors in general and the Salesforce.com AppExchange in particular. A closely related trend is real-time, on-demand access to the external data: say, when a salesperson views a lead record or a lead is first added to marketing automation. This requires immediate matching to find the right person in the supplier’s database, and, sure enough, matching was another popular technology on the show floor. I also saw broader use of Hadoop to handle all this new data: as you probably know, Hadoop effectively handles large volumes of unstructured and semi-structured data, so it’s a key enabling technology for data expansion. A final component is continued growth in the reporting, analytics, and predictive modeling systems that make productive use of the newly-available data.
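
As a rough illustration of the matching step, here is a sketch of how an enrichment service might find the right person when a lead record is viewed or created: try an exact email match first, then fall back to a fuzzy name-plus-company comparison. The reference data, weights, and threshold are all invented for the example; real identity-resolution engines are far more elaborate.

```python
# Illustrative sketch of lead-to-record matching for real-time enrichment.
# The reference data and threshold are invented; production matching engines
# use far richer signals (phonetic keys, address history, ML-based scoring).

from difflib import SequenceMatcher

REFERENCE_DB = [
    {"id": 1, "email": "jsmith@acme.com", "name": "John Smith", "company": "Acme Corp"},
    {"id": 2, "email": "mlee@initech.io", "name": "Mary Lee", "company": "Initech"},
]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_lead(lead, threshold=0.85):
    """Return the best-matching reference record, or None if nothing is close enough."""
    # Exact email match is the cheapest, most reliable signal.
    for rec in REFERENCE_DB:
        if lead.get("email", "").lower() == rec["email"]:
            return rec
    # Otherwise fall back to a weighted fuzzy comparison of name and company.
    best, best_score = None, 0.0
    for rec in REFERENCE_DB:
        score = (0.6 * similarity(lead.get("name", ""), rec["name"])
                 + 0.4 * similarity(lead.get("company", ""), rec["company"]))
        if score > best_score:
            best, best_score = rec, score
    return best if best_score >= threshold else None

print(match_lead({"name": "Jon Smith", "company": "ACME Corporation"}))
```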

Some products combine all these attributes, others offer a few, and some just one. Obviously a single integrated solution is easiest for the buyer, but as Scott Brinker recently pointed out in an insightful blog post, platforms like Salesforce.com may actually make it practical for marketers to mix and match individual products without the technical pain traditionally associated with integration. It therefore makes sense to view the data-related systems as a cluster of capabilities that will develop as parts of a single ecosystem, collectively raising the utility and importance of external data to marketers.

The second theme, considerably less grand, was lead scoring. I suppose this is really just a subset of the analytics component of the data theme, but I saw enough new lead scoring features from enough different vendors to treat it separately. In particular, predictive modeling vendor KXEN announced a free, cloud-based service to automatically score a new Salesforce.com lead’s likelihood of converting into a contact. (If you’re not familiar with Salesforce.com terminology: contacts are linked to an account, while leads are not. The conversion usually indicates the salesperson has deemed the person a valid prospect and is thus a critical stage in most sales processes.)

The KXEN service requires absolutely no set-up; users just install it from the AppExchange. KXEN then reads the data, builds a predictive model based on past results, and returns the scores on current leads. From a technical standpoint, the modeling is nothing new, and indeed the people I met at the KXEN booth seemed to feel the product was barely worth discussing. But I’ve long felt that an automated, predictive-model-based scoring service was a major business opportunity because it would replace the time-consuming, complicated, and surely suboptimal lead scoring models that most companies now build by hand, usually with little basis in real data. Of course, there are plenty of other predictive modeling systems available for marketers, but I’m excited because I don’t think anyone else has made model-based lead scoring as simple as the KXEN offering. Maybe I need to get out more.
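
For readers curious what automated, model-based scoring amounts to under the hood, here is a stripped-down sketch of the general approach (my own illustration, not KXEN’s actual implementation): fit a model on historical leads with known outcomes, then score the open leads. The features and data are invented, and the sketch assumes the scikit-learn library is available.

```python
# Minimal sketch of model-based lead scoring: train on historical leads with
# known outcomes, then score current leads. This illustrates the general idea
# only; it is not how KXEN or any other vendor actually implements it.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented history: [pages_viewed, emails_clicked, is_target_industry]
X_train = np.array([
    [1, 0, 0], [8, 3, 1], [2, 1, 0], [12, 5, 1],
    [0, 0, 0], [6, 2, 1], [3, 0, 0], [9, 4, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = lead converted to a contact

model = LogisticRegression()
model.fit(X_train, y_train)

# Score new, unconverted leads: estimated probability of conversion.
new_leads = np.array([[5, 2, 1], [1, 0, 0]])
for lead, score in zip(new_leads, model.predict_proba(new_leads)[:, 1]):
    print(f"lead {lead} -> conversion probability {score:.2f}")
```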

Speaking of which, I met SetLogik at a loud party after several glasses of wine, so I may have been wearing the marketing technology equivalent of beer goggles. But if I understood correctly, it tackles the really hard part of revenue attribution by using advanced matching technologies to connect the right leads and contacts to sales (reflected in closed opportunities in Salesforce.com). Once you’ve done that, determining which marketing touches influenced those people is relatively easy. It’s a unique solution to a huge industry problem. Come to think of it, correct linkages are also critical for building effective lead scoring models, which, it turns out, SetLogik also builds. (I'll admit it: I Googled them the next day.) So they're part of that theme as well.

As I mentioned earlier, data and lead scoring were themes that emerged for me during the conference. I did have some other themes in mind when I started, which are also worth sharing. I’ll do that another day.

Finally, it’s worth noting that the conference itself was tremendously well run. It sometimes felt that one-third of those 90,000 people were Salesforce.com employees hired to stand around and answer questions. Where they found so many cheerful people outside of the Midwest I’ll never know. Congratulations and thanks to the Salesforce.com team that made it happen.

Friday, September 14, 2012

ClickDimensions Grows Quickly by Offering B2B Marketing Automation as a Microsoft Dynamics CRM Add-On

When I first wrote about ClickDimensions in a February 2011 post, the concept was intriguing – a marketing automation add-on to Microsoft Dynamics CRM – but the product itself had been available for less than six months and claimed barely 50 clients. Since then, the company has grown its customer base more than ten-fold (it won’t release specific figures), won the Dynamics Marketplace Solution Excellence Partner of the Year award, signed up more than 250 channel partners around the world, and attracted outside funding. Sounds like the idea has legs.

The product has matured as well. The most important addition is a flow builder that supports branching campaigns. This is a bit limited – each node can only have yes/no branches – but it includes a reasonable set of actions including send an email, wait, notify user, add or remove from list, and run CRM workflow. It can also check whether a contact has opened an email or clicked on a link. This is comparable to standard marketing automation products.
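
To give a sense of what such a yes/no flow builder produces behind the scenes, here is a hypothetical sketch of how a branching campaign might be represented and executed. The node types loosely mirror the actions listed above, but the structure is my own illustration, not ClickDimensions’ actual data model.

```python
# Hypothetical representation of a yes/no branching campaign flow, with node
# types loosely mirroring the actions described above. This is an illustration
# of the concept only, not ClickDimensions' internal format.

FLOW = {
    "start":         {"action": "send_email", "email": "intro_offer", "next": "wait_3"},
    "wait_3":        {"action": "wait", "days": 3, "next": "check_click"},
    "check_click":   {"action": "check", "test": "clicked:intro_offer",
                      "yes": "notify_rep", "no": "send_reminder"},
    "notify_rep":    {"action": "notify_user", "user": "sales_rep", "next": None},
    "send_reminder": {"action": "send_email", "email": "reminder", "next": None},
}

def run_flow(contact, flow, node="start"):
    """Walk a contact through the flow, printing each action (simulation only)."""
    while node:
        step = flow[node]
        if step["action"] == "check":
            branch = "yes" if step["test"] in contact.get("events", []) else "no"
            print(f"check {step['test']} -> {branch}")
            node = step[branch]
        else:
            print(f"{step['action']}: {step}")
            node = step.get("next")

run_flow({"email": "jane@example.com", "events": ["clicked:intro_offer"]}, FLOW)
```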

Other enhancements include an expanded survey builder that can skip questions or pages based on previous answers; A/B testing (two splits only) within emails; subscription management; and improved builders for email, landing pages, and forms. The system already provided dynamic email content, although users have to write the selection rules in a scripting language – something many marketers will find intimidating. Web behavior tracking, lead scoring, and social discovery (searching for and importing public data on LinkedIn) are also available.

None of this would make ClickDimensions stand out from other marketing automation systems if it weren’t for its fundamentally different architecture. ClickDimensions works directly from the Dynamics CRM data files, rather than creating a parallel, synchronized database like most marketing automation products. Additional tables needed by ClickDimensions are also custom objects within the Dynamics system. The result is a direct connection between the two systems. ClickDimensions functions are also accessed within the Dynamics interface.

ClickDimensions isn’t the only vendor to take this approach. CoreMotives (purchased last March by Silverpop) has a similar architecture within the Microsoft Dynamics world, and Predictive Response (which I haven’t looked at in detail) is a similar add-on to Salesforce.com. Still, as the shortness of this list suggests, the dominant approach to marketing automation remains separate, synchronized data files.

This could well change: as marketing automation becomes more widely understood, it will be purchased by less sophisticated companies. These buyers are already customers of CRM resellers who can easily offer ClickDimensions and similar CRM add-on products. That gives the add-on vendors efficient access to a huge market. The CRM vendors themselves would have the same advantage should they choose to add marketing automation features.

In practice, most buyers neither know nor care about the architectural differences between the two approaches. So long as the add-on architecture works – and there’s no reason to doubt it does for all but the very largest implementations – success may well be determined by who reaches the most buyers first. As ClickDimensions’ fast growth already suggests, its reseller-based approach could be a decisive advantage as the marketing automation industry enters its next stage. Only time will tell.

Monday, September 03, 2012

Moving On: Lessons from the B2B Marketing Trenches

 
I’ve just ended my six-month tour as VP Optimization at LeftBrain DGA, and am now returning full time to my usual consulting, writing, and general shenanigans. It was fun to work again as a hands-on marketer. Here are some insights based on the experience.

- lots of content. We all know that content is king, but sometimes forget the king has a voracious appetite. A serious demand generation program might move contacts through a half dozen stages with several levels within each stage and several messages within each level. This could easily come to forty or fifty messages, each offering a different downloadable asset. The numbers go even higher when you start to create separate streams for different personas. Building these materials is a major undertaking, first to understand what’s appropriate and then to create it. But deploying the initial content is just the start: you then have to monitor performance, test alternatives, and periodically refresh the whole stream. Finding efficient ways to do this is critical to keeping costs and schedules within reason. (Note that I’m talking here about email programs to nurture known contacts, not acquisition programs to attract new names. That takes another massive content collection.)

- content isn’t everything. It’s an old saw among direct marketers that the list determines most of your response rate and the offer controls most of the rest. Actual creative execution (copy, graphics, format, etc.) accounts for maybe 10% of the result. We proved this repeatedly with tests that used different content at the same stage in the campaign flow: basically, results were similar even with content originally designed for different purposes. Conversely, the same piece of content had hugely different results at different places in the flow. What this meant in both cases was that response was primarily driven by the people at each stage, not by the specifics of the materials presented.

- simplicity helps. That results are primarily driven by audience doesn’t mean that content doesn’t matter. We did a fascinating (to me, at least) analysis of 100 emails, logging specific features such as number of words and readability scores and then comparing these against open, click-through, and form submit rates. A clear pattern emerged: simpler emails (shorter, fewer graphics, easier to read) performed better. In fact, the pattern was so clear that there's a danger of over-reaction: at some point, a message can be too short to be effective (think of the mayor in The Simpsons, who just repeats “Vote for Me”). So the real trick is to find an optimal length, and even then to recognize that some messages truly need to be longer than others.
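
For anyone who wants to try a similar analysis, here is a simplified sketch of the approach: log a few features per email alongside its response rates, then look at the correlations. The numbers below are invented and the features are crude proxies; the real analysis used proper readability scores and roughly 100 emails. The sketch assumes the pandas library.

```python
# Simplified sketch of the email analysis described above: log features per
# email and correlate them with response rates. Data is invented; a real
# analysis would use proper readability scores (e.g. Flesch) and ~100 emails.

import pandas as pd

emails = pd.DataFrame({
    "words":            [80, 150, 300, 60, 220, 120],
    "images":           [0, 1, 3, 0, 2, 1],
    "avg_sentence_len": [11, 14, 21, 9, 18, 13],   # crude readability proxy
    "open_rate":        [0.24, 0.21, 0.15, 0.26, 0.17, 0.22],
    "click_rate":       [0.040, 0.032, 0.018, 0.043, 0.022, 0.035],
    "form_rate":        [0.012, 0.009, 0.004, 0.013, 0.005, 0.010],
})

# Correlation of each feature with each outcome; consistently negative values
# support the "simpler performs better" pattern.
features = ["words", "images", "avg_sentence_len"]
outcomes = ["open_rate", "click_rate", "form_rate"]
print(emails[features + outcomes].corr().loc[features, outcomes].round(2))
```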

- simplicity isn’t everything, either. We did a lot of testing – it was my favorite part of my job – but the content tests were often inconclusive: sometimes shorter won, sometimes longer won, most often the difference was too small to matter. Given that we were starting with competently-created materials, that’s not too surprising. On the other hand, we consistently found that forms with fewer questions yielded better results, typically by a ratio of 3:1. This is one example of a non-content item with major impact; another was contact frequency (more is better, but, as with simplicity, only up to a point). There were other aspects of program structure that I would have tested had time and resources permitted; the goal was to focus on variables with the potential for a substantial impact on overall results. This generally meant moving beyond individual content tests to items with larger and more global impact.

- test themes, not details. Don’t misinterpret that last sentence: I’m not against content tests. What I'm against is tests that only teach one small, random lesson, such as whether subject line A is better than subject line B. The way to build more powerful tests is to build them around a hypothesis and then try several simultaneous changes that support or refute that hypothesis. (I’ve shamelessly stolen this insight from Marketing Experiments, whose methodology I hugely admire and highly recommend.) So, if you think simplicity is an issue, create one test with a shorter subject line and less copy and fewer graphics and a simpler call to action, and run that against your control. This is exactly the opposite of conventional testing advice of changing just one thing at a time. That approach made sense back in the days of direct mail when you were running a handful of versions per year, but isn’t an option in the content-intensive environment of modern online marketing. And even if you had the resources to run a gazillion separate tests, you’d still need to see larger patterns to guide your future content creation.

- multivariate tests work. As if the infinite number of potential tests were not enough of a challenge, most B2B marketers also have relatively small program quantities to work with. We multiplied our test volume by applying multivariate test designs, which let us use the same contacts in several different test cells simultaneously. This probably needs a post of its own, but here's a quick example: Let’s say you need 10,000 names per test cell and have 20,000 names total. Traditionally, you could just run one test comparing two choices. But with a multivariate design, you’d create four cells of 5,000 each. Cells 1 and 2 would get the first version of the first test, while cells 3 and 4 would get the second version. But – and here’s the magic – cells 1 and 3 would also get the first version of the second test, while cells 2 and 4 would get the second version of the second test. Thus, each test gets the required 10,000 names, but you can still see the impact of each test separately. (Here’s a random article that seems to do a good job of explaining this more fully.) We generally limited ourselves to two or three tests at a time. More complicated structures are possible but I was always concerned about keeping execution relatively simple since we were doing all our splitting manually.
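
Here is a sketch of that four-cell arithmetic, just to make the assignment logic explicit: 20,000 names split into four cells of 5,000, with each version of each test still reaching 10,000 names. The cell-to-version mapping follows the example above; the code itself is purely illustrative, since in practice we did the splits manually in the campaign tool.

```python
# Sketch of the 2x2 multivariate design described above: 20,000 names in four
# cells of 5,000, yet each version of each test still reaches 10,000 names.
# Purely illustrative -- in practice we split the lists manually.

import random

random.seed(42)
names = [f"contact_{i}" for i in range(20_000)]
random.shuffle(names)

# cell -> (version of test 1, version of test 2), matching the example above:
# cells 1-2 get version A of test 1; cells 1 and 3 get version A of test 2.
DESIGN = {1: ("A", "A"), 2: ("A", "B"), 3: ("B", "A"), 4: ("B", "B")}

cells = {c: names[(c - 1) * 5_000 : c * 5_000] for c in DESIGN}

# Verify that every test arm still receives the required 10,000 names.
for test_index in (0, 1):
    for version in ("A", "B"):
        size = sum(len(cells[c]) for c, v in DESIGN.items() if v[test_index] == version)
        print(f"test {test_index + 1}, version {version}: {size:,} names")
```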

- metrics matter. As it happens, most of the programs we executed relied heavily on form submissions to move people to the next stage. This meant that form fills were the key success metric, not opens or click-throughs. Although these generally correlate with each other, the relationship is weaker than you might expect. Some exceptions were due to obvious factors such as differences in form length, but the reasons for others were unknown. (I often suspected but could never prove reporting or data capture issues.) Of course, most email marketers are used to looking at open and click rates, so it took some gentle reminding to keep everyone focused on the form fill statistics. The good news is we prevented some pretty serious mistakes by using the right measure. Note that form fills are especially important in acquisition programs: responders are lost altogether if they don't complete a form that lets you add them to your database.

- test results need selling. As you’ve probably guessed by now, I spent much of my time lovingly crafting our tests and analyzing the results. But others were not so engaged: more than once, I was asked what we found in a test whose results I had published weeks before. This wasn’t a complete surprise, since other people had many other items on their minds. But we did eventually conclude that simply publishing the results was not enough, and started to go through the results in person during weekly and monthly status meetings. We also found that reviewing individual results was not enough; when we found larger patterns worth reporting, we had to present them explicitly as well. Again, there’s no surprise in this, but it does bear directly on expectations that managers will find important data if reporting systems simply make it available. Most will not: the systems have to go beyond reporting to highlight what’s new, what it means, why it matters, and what to do next. Although some parts of that analysis can be automated, most of it still relies on skilled human effort.

- reports need context. Reporting was another of my responsibilities, and we made great strides in delivering clearer and more actionable data to our clients. One of the things I already knew, but was reminded really matters, is the importance of putting data in context. It wasn’t enough just to show cumulative quantities or conversion statistics; we needed to compare this data with previous results, targets, and other programs to give a sense of what it meant. To take one example, we reported the winner of a series of email package tests without realizing until late in the analysis that the response rate for the test as a whole was much lower than previous results. This was a more important issue than the tests themselves. We had other instances where entire waves were missing from reports; we only uncovered this because someone noticed they were missing – whereas a proper comparison against plan would have highlighted it automatically. Again, such comparisons are widely acknowledged as a best practice: my point here is they have immediate practical value, so they shouldn't just be relegated to the list of “nice but not necessary” things that no one ever quite gets around to doing.
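
As a small illustration of how that kind of check can be automated, here is a sketch that compares reported wave volumes against the plan and flags anything missing or far off target. The data, names, and threshold are invented for the example.

```python
# Sketch of an automatic plan-vs-actual check: flag waves that are missing
# from the report or far below their planned volume. Data is invented.

planned = {"wave_1": 5000, "wave_2": 5000, "wave_3": 4000, "wave_4": 4000}
reported = {"wave_1": 4950, "wave_2": 2100, "wave_4": 3980}   # wave_3 missing

for wave, target in planned.items():
    actual = reported.get(wave)
    if actual is None:
        print(f"{wave}: MISSING from report (planned {target})")
    elif actual < 0.8 * target:
        print(f"{wave}: {actual} sent vs {target} planned -- investigate")
    else:
        print(f"{wave}: on plan ({actual}/{target})")
```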

- survival is more important than conversion. That phrase has a vaguely religious ring to it, and I suppose it’s also true in a theological sense. But right now I’m talking about reporting of survival rates (how many people who enter a nurture program actually end up as customers) vs. conversion rates (how many people move from one program stage to the next). Marketers tend to focus on conversion rates, and of course it’s true that the survival rate is mathematically the product of the individual conversion rates. But we repeatedly saw changes in program structure or even individual treatments that caused large swings in a single conversion rate, which were often balanced by opposite changes in the following stage. Looking at conversion rates in isolation, it was hard to see those patterns. This was an even bigger problem when each rate was calculated cumulatively, so the impact of a specific change was masked by being merged into a larger average. More important, even when there was an obviously related change in two successive rates, the net combined impact wasn’t self-evident. This is where survival rates come in, since they directly report the cumulative result of all preceding stages. Of course, conversion rates and survival rates are both useful: I'm arguing you need to report them both, not just conversion rates alone.
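
Here is a quick numerical sketch of the point: the survival rate at each stage is just the running product of the preceding conversion rates, and reporting it alongside the stage-level rates makes the net effect of offsetting changes visible. The rates below are invented.

```python
# Sketch of conversion vs. survival reporting: survival at each stage is the
# running product of the preceding conversion rates. Rates below are invented.

def survival_curve(conversion_rates):
    survival, cumulative = [], 1.0
    for rate in conversion_rates:
        cumulative *= rate
        survival.append(cumulative)
    return survival

# A structural change boosts stage 2 conversion but depresses stage 3:
before = [0.40, 0.30, 0.50, 0.25]
after  = [0.40, 0.45, 0.33, 0.25]

for label, rates in (("before", before), ("after", after)):
    print(label, [f"{s:.3f}" for s in survival_curve(rates)])

# Looking only at stage 2 ("after" wins) or stage 3 ("before" wins) is
# misleading; the survival column shows the net result is nearly unchanged.
```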

- throughput matters. Survival and conversion rates show the shape of the funnel, but not the dimension of time. We did report how long it took contacts to move through our programs – in fact, a sophisticated and detailed approach was in place before I arrived – but the information was largely ignored. That was a pity, because it contained some important insights about contact behaviors, opportunities for improvement, and results of particular tests. A greater focus on comparing expected vs. actual results would have helped, since calculating the expectations would have probably required a closer focus on how long it took leads to move through the funnel.

- acceleration is hard. A greater focus on timing would have also forced a harder look at the fundamental premise of many B2B campaigns, which is that they can speed movement of prospects through the sales funnel. The more I think about this, the more doubts I have: B2B purchases move according to their own internal rhythms, driven by things like budget cycles, contract expirations, and management changes. Nurture programs can educate potential buyers and build a favorable attitude towards the seller, thereby increasing the likelihood of making a sale once the buyer is ready. They can also track, through lead scoring, when a buyer seems ready to act and is thus ripe for contact by sales. That’s all good and valuable and should more than justify the nurture program’s existence. But expectations of acceleration are dangerous because they may not be met, and could unfairly make a successful program look like a failure.

- drip needs attention.  Like that leaky faucet you never quite get around to fixing, drip programs often don't get the attention they deserve.  In practice, the vast majority of people who enter a nurture program will not move quickly to the purchase stage; most will stall somewhere along the way. This is where the drip program must work hard to keep them engaged. Again, every marketer knows this, but it’s easy to focus attention on the fascinating and complicated stage progressions (remember all that content?) and relegate the drip campaigns to a simple newsletter. Big mistake. Put as much effort into segmenting your drip communications and encouraging response as you put into stage conversions. If you want a practical reason for this, look at your mail quantities: chances are, you’re actually sending more drip emails than all your active stages combined.

- proving value is the ultimate challenge. It’s relatively easy to track contacts as they move through the marketing funnel, but it’s much harder to connect them to actual revenue in the sales or accounting systems. I whined about this at length in June, so I won’t repeat the discussion. Suffice it to say that some sort of revenue measurement, however imperfect, is necessary for your testing, reporting, and program execution to be complete.

Whew, it’s good to have all that out of my system. As I said at the beginning, I did enjoy my little visit to the marketing trenches. Now, it’s goodbye to that world and hello to what’s next.