Account Based Marketing. Perhaps you’ve heard of it?
Okay, just kidding: ABM gets just slightly less attention than Donald Trump and arguably generates a similar amount of confusion. Many of the industry vendors are addressing that problem (the confusion, not Trump) by working together in the Account Based Marketing Consortium which, among other things, has published an excellent survey and nifty six-level functional framework. You can download them both here.
The purpose of the framework is to help marketers understand the differences between ABM vendors. Let’s apply it to ABM Consortium member Azalead, a Paris-based firm that is planning to enter the U.S. market.
We’ll start with an overview of what Azalead does. It lets marketers create lists of target accounts by identifying Web site visitors (at the company level) based on IP address, by reading data through a CRM system integration, or by uploading lists from any source. Users can export account lists for retargeting, connect via API with the company Web site to personalize messages shown to visitors from target firms, or send lists to CRM. Azalead tags can be embedded in Web pages or emails to track response. Opportunities and revenues can also be imported from CRM. The system reports on impressions (i.e., messages sent), responses, opportunities, and pipeline value for targeted accounts and gives salespeople lists of all identified Web site visitors or of visits from target accounts. It can also rank accounts with a system-generated engagement score.
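Azalead’s matching technology is proprietary, so the sketch below is purely illustrative: the IP ranges, company names, and function are all invented, not Azalead’s actual API. It just shows the core idea of company-level identification, i.e., mapping a visitor’s IP address into a known company’s address range.

```python
# Hypothetical sketch of company-level visitor identification by IP.
# The IP ranges and company names below are invented illustration data;
# real systems like Azalead rely on proprietary matching databases.
from ipaddress import ip_address, ip_network
from typing import Optional

COMPANY_RANGES = [
    (ip_network("203.0.113.0/24"), "Example Corp"),
    (ip_network("198.51.100.0/24"), "Acme SA"),
]

def identify_company(visitor_ip: str) -> Optional[str]:
    """Return the company owning the visitor's IP range, or None if unknown."""
    addr = ip_address(visitor_ip)
    for net, company in COMPANY_RANGES:
        if addr in net:
            return company
    return None
```

In practice the `None` branch is where the hard work happens: the gains from 30% to 50% identification described below come from resolving addresses that no public registry ties directly to a company.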
Now that you have a more or less coherent view of Azalead functions, we can map these to the Consortium framework.
Account Selection: users can select accounts from the list of identified Web site visitors, from the list of accounts imported from CRM, or from any other list uploaded to the system. There is no predictive scoring, although users can build lists by filtering on whatever attributes have been attached to the records, such as industry, country, or company size. Azalead has created its own technology to identify site visitors whose IP address cannot be tied directly to a specific company. Results vary, but in Europe this can increase the percentage of identified visitors from the usual 30% to as high as 50%.
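There is no model behind this kind of list building, just attribute filters, which work in the spirit of the sketch below. The account records and field names are hypothetical, invented for illustration.

```python
# Hypothetical sketch of attribute-based account list building
# (filtering only, no predictive scoring); all data here is invented.
accounts = [
    {"name": "Acme SA", "industry": "software", "country": "FR", "employees": 250},
    {"name": "Globex",  "industry": "retail",   "country": "US", "employees": 5000},
    {"name": "Initech", "industry": "software", "country": "US", "employees": 120},
]

def build_list(accounts, **criteria):
    """Keep accounts whose attributes equal every supplied filter value."""
    return [a for a in accounts
            if all(a.get(field) == value for field, value in criteria.items())]
```

For example, `build_list(accounts, industry="software", country="US")` would return only the Initech record, while filtering on industry alone returns both software accounts.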
Insights: once a company has been identified, Azalead can present users – typically sales people – with a screen that shows the company name, basic information (industry, revenues, etc.) from an external database, engagement history as captured by Azalead’s Web and email tags, and contact names from the CRM system.
Content: Azalead doesn’t create content.
Orchestration: Azalead can create lists based on engagement level, opportunity stage (imported from CRM), and other account attributes. Different lists can be selected for different marketing treatments.
Delivery: Azalead tags on a company Web site can both identify visitors and return basic parameters including company name, industry and size. These parameters can drive personalized Web site messages. Messages in other channels are generated by sending lists to marketing automation, CRM, retargeting, or other external systems.
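The parameters returned by the tag can then select a message, which in spirit works like the hypothetical lookup below. The industries, copy, and function name are invented examples, not Azalead’s actual personalization mechanism.

```python
# Hypothetical sketch of using tag-returned company parameters to pick
# a personalized Web message; industries and message copy are invented.
MESSAGES = {
    "software": "See how dev teams ship faster with us",
    "retail":   "Turn store visits into repeat customers",
}
DEFAULT_MESSAGE = "Welcome! Learn what we can do for your business"

def personalize(visitor_params: dict) -> str:
    """Choose a message based on the industry reported by the site tag."""
    return MESSAGES.get(visitor_params.get("industry"), DEFAULT_MESSAGE)
```

Unidentified visitors simply fall through to the default message, so personalization degrades gracefully when the IP match fails.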
Measurement: Azalead offers a summary dashboard and more detailed information on retargeting, display ads, and Web site visitors. A pipeline impact report plots sales opportunity probability against marketing engagement for individual deals, helping marketers to see the impact of their efforts.
So, what information is missing here? The framework doesn’t explicitly cover integration, arguably because that’s a supporting technology rather than a functional capability. Azalead has API-level integration with Salesforce.com and Microsoft Dynamics CRM. A general purpose API allows custom integration with Web sites and other systems. The company is working on API integration with major marketing automation platforms, but currently uses its own tags to track response to emails sent by those systems.
The framework also doesn’t include pricing or vendor background, which are also not functional capabilities. Azalead pricing is published on its Web site. A limited system starts at $1,000 per month for 3,000 monthly Web visits and 400,000 ad impressions. A system with all features starts at $3,200 per month for 20,000 Web visits and 1.4 million impressions. The company was founded in 2013 and currently has about 80 customers, mostly tech companies in Europe. It plans to open a New York office later this year.
This is the blog of David M. Raab, marketing technology consultant and analyst. Mr. Raab is founder and CEO of the Customer Data Platform Institute and Principal at Raab Associates Inc. All opinions here are his own. The blog is named for the Customer Experience Matrix, a tool to visualize marketing and operational interactions between a company and its customers.
Thursday, February 25, 2016
Wednesday, February 24, 2016
Why Designing Your Marketing Technology Stack is a Waste of Time
My post last week about machine intelligence sparked a Twitter comment from @Jetlore, “The term 'machine learning' is like the term 'mobile' 7-10 years ago. It's simply something that all good software will do.” On reflection, this is absolutely correct – it is why there are already so many different uses of machine learning across the marketing landscape. This got me thinking about whether we could learn something from other technologies that were once bleeding edge but later became commonplace.
The most obvious of those is electricity. Nicholas Carr has explored the analogy between electric power utilities and information technology utilities in his books, but I haven’t (yet) read them. Instead I did a bit of my own research into the early twentieth century transition from steam to electric power in factories, eventually finding the key information in a paper From Shafts to Wires: Historical Perspectives on Electrification by Warren D. Devine, Jr., (The Journal of Economic History, June 1983).
The story turned out to be quite interesting, at least to me. In 1898 electric motors provided less than 5% of factory power. By 1929 they provided nearly 80%. That 30 year span means a generation of managers spent their entire careers dealing with the shift. More precisely, they dealt with many shifts: the first electric motors simply drove the same overhead shafts that had previously been powered by steam engines or water wheels (leather belts transferred power from the shafts to individual machines). Then the motors drove numerous small shafts instead of one big shaft; then separate motors were attached to individual machines; finally, the machines themselves were redesigned to take advantage of having a motor of their own. Once the machines had been optimized for electric motors and factories had been redesigned to make the best use of this new configuration, the pace of change slowed down.
The analogy with machine learning and with marketing technology in general is clear. Initial applications fit the new technology into the old process: that’s why I love this picture of a robot secretary, which was someone’s initial (presumably joking) idea of how computers could replace human secretaries.* Applications then evolve into something completely different as people uncover the best ways to use the new technology. Those changes create other changes in related systems: getting rid of the overhead power shafts let factories become bigger and more efficient because machines could be placed anywhere and the ceiling was now free for better lights, ventilation, and overhead cranes. One article quoted Henry Ford as saying that his moving assembly line would have been impossible without electrification.
The obvious lesson is that marketers should also expect continued flux as new technologies are invented and refined. But while the need to plan for such change is a commonplace among industry gurus, myself included, I haven’t seen much attention paid to the less-obvious conflict between planning for change and the standard approach of defining requirements, designing an architecture to meet those requirements, and then buying components to flesh out that architecture. Just as the physical architecture of factories changed as electric motors were deployed in different and more effective ways, the architecture of marketing systems can be expected to change as the technologies mature.
This means managers need tools designed to deal with continuous change. These include systematic ways to decide when to adopt a new technology and when to wait for further improvements, and ways to ensure that a technology you adopt doesn’t prevent you from taking advantage of future technology that is more important. Early twentieth century managers invented industrial engineering, standardized fittings, and return on investment analysis for precisely those reasons. Today’s marketing technology managers need similar tools, but I don’t hear much discussion about how to create them.
The second less-obvious point, although I guess we can credit it to Carr, is that the reward for successfully managing these continuous changes is nothing more than survival. Like today’s marketing technology, electric motors were purchased from outside suppliers who made the same equipment available to everyone. Sound choices were essential and making the wrong choice could be fatal (literally, where electricity was involved). But being a smart, fast follower was good enough; being a pioneer or master user of the new technology didn't ultimately matter because your surviving competitors ended up with similar tools. The final success of firms depended on the quality of their products, distribution, and, yes, marketing, not on whether they used electric motors. This meant that electricians, who at one point were considered super-elite if not magical, ultimately became nothing more than valued but prosaic craftsmen. I suppose that will be the fate of marketing technologists as well: today we are river pilots navigating wild rapids, an exhilarating task with life-or-death responsibility. But at some point we’ll reach calmer waters, and then we’ll seem, and be, less important.
___________________________________________________________________
* On the other hand, conspiracy theorists: why does the robot in this picture have a reflection while the lady does not? Perhaps she is the true alien, sent to infiltrate our planet with her diabolically irresistible technology.
BlueConic Launches Marketing Technology Self-Assessment Tool
It's a little more than a year since I collaborated with BlueConic on a marketing technology maturity model. They've been busy improving their product, in particular by adding a set of templates for prebuilt marketing programs, which they call "blueprints". Users first select a goal, such as "decrease bounce rates". They are then led through a sequence of tasks to collect the necessary customer data; assemble the data into profiles; segment customers using the profiles; and deliver the messages to external systems. The goal is to make it easier for marketers to create useful programs with the system.
But BlueConic was also working on a little side project: an online self-assessment tool based on the maturity model. This asks users about a dozen questions about their marketing methods, business processes, and organizational resources. It then provides an assessment of how they compare with other companies and makes recommendations for how to make improvements. I provided much of the content, so obviously I'm biased, but I do think the results are pretty interesting and useful. And it's free...did I mention free? You can read more about the tool in this BlueConic blog post and access it here. Enjoy!
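BlueConic hasn’t published the tool’s scoring logic, so the sketch below is purely invented: a guess at how a dozen-question self-assessment might roll answers up into a score and compare it against a peer benchmark. The rating scale, benchmark value, and function names are all assumptions.

```python
# Invented sketch of a maturity self-assessment: average the answers
# (rated 1-5) into a score and compare against a peer benchmark.
# Not BlueConic's actual algorithm; the benchmark value is an assumption.
def maturity_score(answers):
    """Average a list of 1-5 ratings into a single maturity score."""
    if not answers:
        raise ValueError("no answers given")
    return sum(answers) / len(answers)

def compare_to_peers(score, benchmark=3.0):
    """Say whether a score is ahead of, level with, or behind the benchmark."""
    if score > benchmark:
        return "ahead of peers"
    if score < benchmark:
        return "behind peers"
    return "level with peers"
```

A real tool would presumably weight questions by topic and attach specific recommendations to the weak areas rather than report a single average, but the compare-against-a-benchmark pattern is the core of any such assessment.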
Sunday, February 21, 2016
Study: Half of Marketing Jobs Will Be Replaced by Machine Intelligence
One of the highlights of last week’s Content2Conversion conference was a keynote by the always-stimulating Tim Riesterer of Corporate Visions, who argued that an effective sales presentation should (1) start with an unfamiliar factoid that shows why change is essential (a concept similar to the CEB "challenger sale") (2) show that you have a solution and (3) contrast your solution with other approaches to clarify how it's different and better. Riesterer’s own talk followed exactly that template, a bit of consistency I always admire. This in turn got me thinking about my presentation on machine learning systems at the MarTech conference in March. I pretty much finished drafting it last week, but it was still an interesting exercise to imagine recasting it along the lines Riesterer proposed.
Linear thinker that I am, this meant first looking for an appropriate factoid about why the growth of machine intelligence poses a threat that can’t be ignored. This led to several hours of research into what’s being written about machine intelligence, which I'll somewhat sheepishly admit is my idea of a good time. A reasonable starting question was how many marketing jobs are threatened with replacement by intelligent systems. Some quick Googling led to an article that quoted economist W. Brian Arthur as estimating that machines could replace 100 million U.S. jobs by 2025.* That’s pretty scary but a second, even more intriguing article quoted a study by Oxford economists Carl Benedikt Frey and Michael A. Osborne that estimated the probability of 702 specific job categories being replaced by “computerisation” (British spelling). This offers the possibility of showing how much employment risk is faced by marketers in particular.
Frey and Osborne list two categories with “marketing” in their title:
- “Marketing Managers” with a 1.4% probability of computerization, and
- “Market Research Analysts and Marketing Specialists” with a 61% probability.
Frey and Osborne did their calculations based on the assumption that it will be hardest to computerize jobs that require manual dexterity, creativity, and social skills. These assumptions may already be obsolete – the paper was written in 2013 and this 2014 video by machine learning expert Jeremy Howard suggests that new developments in “deep learning” are making machines more powerful than anticipated, especially in areas relating to creativity and social interaction. Frey and Osborne also conclude that management jobs are relatively safe in part because managers need social skills to motivate their staff – a need that will diminish if the staff is largely replaced by machines. So I'd say there's a good chance that all but the most senior jobs are less secure than Frey and Osborne suggest.**
That's all interesting, but what does it mean for my presentation? Let’s go back to Riesterer’s three-part template.
- The first step was proving that change is necessary. I think showing marketers that half to two-thirds of their jobs will vanish in the next ten years should do the trick.
- The second step was offering a solution. I’m proposing that learning to manage intelligent machines will be the key to future success. My MarTech presentation will offer some specific suggestions on how to do that.
- The third step was contrasting the proposed solution with other approaches. That’s easy if the alternative is to continue with traditional marketing methods. It’s a bit harder if the alternative is making other types of changes, if only because you’d have to list what those alternatives might be. I can think of a few approaches I can easily out-argue, such as random experimentation or buying new technology without addressing organizational and process issues. A more challenging competitor is to focus on optimizing the customer experience rather than use of machine intelligence. Customer experience is inherently a more appealing focus because it sounds strategic and customer-centered while machine intelligence sounds narrowly mechanical. Still, the ultimate question is which approach will give better results, and I suspect machine intelligence – by making marketers more productive and thus freeing them to do more new things – will eventually win out. (Or not. You could argue that machine intelligence is like electricity: vastly better than its predecessors and destined to be ubiquitous, but something that vendors will make equally available to everyone, and thus not in itself a long-term competitive advantage.)
------------------------------------------------------------------------------------------------
*What Arthur actually said is machines in the U.S. could produce output equal to the 1995 U.S. economy, which employed 100 million people. Close enough. As a point of reference, the current U.S. total of jobs is around 150 million.
**To be clear, Frey and Osborne are explicit that they are NOT making predictions about whether, how quickly, or how many jobs will actually be lost. They do somewhat casually suggest it's reasonable to expect about half of U.S. jobs will be potentially computerizable in "a decade or two" but cite many factors that could prevent computers from actually taking those jobs.
Thursday, February 18, 2016
Future of Marketing Content: Reflections on the Content2Conversion Conference
I spent the early part of this week at Demand Gen Report's Content2Conversion conference. The event was superbly run, as usual, but I didn't sense any over-arching pattern until I was literally on my way out the door and stopped for one last chat with some colleagues. Then I knitted together – at least to my own satisfaction – what had seemed to be disconnected observations.
The first strand was the number of systems that offer detailed information about content consumption. Vendors including Highspot, SnapApp, Ceros, Uberflip, and ion interactive all let marketers track customer behaviors within a piece of content – such as how much time is spent on each page or even regions within a page. On reflection, it struck me as amazing that we have this level of detail available, given that just a few years ago marketers couldn’t even tell whether a given piece of content had been looked at. The uses for this information are obvious, including helping marketers to understand which topics are most appealing and giving salespeople insight into the interests of individual prospects. But I wonder how many marketers or content creators are ready to take advantage of this information. Of course, it’s clear that they should. But I suspect most are already overwhelmed by the less precise information available through less advanced technologies. This leaves them with little appetite for still greater detail.
Naturally, my own preferred solution to this technology-created flood of data is still more technology. Some of this involves advanced analytics to extract the significant needles of information from the hayfields of detail, although I don’t recall seeing vendors who do that type of analysis at the show or hearing speakers discuss them. But the more interesting response is to automate content creation and selection directly, using the detailed information to create new content and to send the most appropriate content to each individual. Again, there weren’t many solutions at the show that promised to do this, apart from Captora – which extracts keywords from a company’s Web site and its competitors’ sites, constructs draft landing pages for the most important topics, and deploys them (after some manual polishing) with links to CRM or marketing automation data capture forms. Captora is focused on paid and organic search marketing, so it can’t pick which ads to display to which prospects. But I also chatted with people from Adaptive Campaigns (which did not exhibit), whose system uses rules to generate highly customized programmatic display ads. And, on the way to the airport, I caught up with Idio, another system that automatically analyzes content and picks the best match for each individual – although Idio doesn’t do any content creation or dynamic customization.
As you know from the Machine Intelligence in Marketing Landscape in my last post, I’ve also identified several other systems that use automated methods to generate and select content. I’ll even predict that machine generated content will be a major trend in the near future – precisely because it’s the only practical way for marketers to take full advantage of the detailed information now available on content consumption.
This connects to another theme that I did actually hear articulated at the conference: the need to move beyond “quality” content to appropriate content. That’s an interesting evolution, since recent discussions have often focused on the challenge marketers face in just getting the volume of content they need for increasingly segmented programs. That requirement hasn’t ended, but I heard more discussion of how to create the right content mix and how to create content that is compelling enough to attract attention. To some extent, this argues against the notion of machine-generated content, which will probably never be better than mediocre and formulaic. But I can easily imagine a world where humans create a few great pieces of tentpole content and use a lot of simple, machine-created messages to feed people to it. The machine-based messages won't be brilliant but they'll be effective because they're highly tailored to their targets. This tailoring will be enabled by behavioral and intent data, which were also popular topics at the conference.
I also have one other observation, which was totally unexpected (the best type!). It might be just my imagination, but I think I sensed a bit of overconfidence among marketers about their ability to buy new technology. This is certainly surprising, given that marketers until recently have been more frightened of technology than anything else. I’ll speculate that a new generation of marketers are more comfortable with technology in general and are now reaching positions where they have control over purchasing decisions. Mostly that's great: the industry can’t advance if marketers are afraid to try new things. But some of these buyers may not realize that they are unfamiliar with the full scope of products available or that deploying complex technology is much harder than signing up for a new software-as-a-service application. Let me be clear that this concern is based on one conversation I had and one comment that a friend overheard. So I might be overreacting. Still, it’s something to guard against; overconfidence can lead to cavalier decisions that are just as harmful as indecision based on fear.
The first strand was the number of systems that offer detailed information about content consumption. Vendors including Highspot, SnapApp, Ceros, Uberflip, and ion interactive all let marketers track customer behaviors within a piece of content – such as how much time is spent on each page or even regions within a page. On reflection, it struck me as amazing that we have this level of detail available, given that just a few years ago marketers couldn’t even tell whether a given piece of content had been looked at. The uses for this information are obvious, including helping marketers to understand which topics are most appealing and giving salespeople insight into the interests of individual prospects. But I wonder how many marketers or content creators are ready to take advantage of this information. Of course, it’s clear that they should. But I suspect most are already overwhelmed by the less precise information available through less advanced technologies. This leaves them with little appetite for still greater detail.
Naturally, my own preferred solution to this technology-created flood of data is still more technology. Some of this involves advanced analytics to extract the significant needles of information from the hayfields of detail, although I don’t recall seeing vendors who do that type of analysis at the show or hearing speakers discuss them. But the more interesting response is to automate content creation and selection directly, using the detailed information to create new content and to send the most appropriate content to each individual. Again, there weren’t many solutions at the show that promised to do this, apart from Captora – which extracts keywords from a company’s Web site and its competitors’ sites, constructs draft landing pages for the most important topics, and deploys them (after some manual polishing) with links to CRM or marketing automation data capture forms. Captora is focused on paid and organic search marketing, so it can’t pick which ads to display to which prospects. But I also chatted with people from Adaptive Campaigns (which did not exhibit), whose system uses rules to generate highly customized programmatic display ads. And, on the way to the airport, I caught up with Idio, another system that automatically analyzes content and picks the best match for each individual – although Idio doesn’t do any content creation or dynamic customization.
As you know from the Machine Intelligence in Marketing Landscape in my last post, I’ve also identified a several other systems that use automated methods to generate and select content. I’ll even predict that machine generated content will be a major trend in the near future – precisely because it’s the only practical way for marketers to take full advantage of the detailed information now available on content consumption.
This connects to another theme that I did actually hear articulated at the conference: the need to move beyond “quality” content to appropriate content. That’s an interesting evolution, since recent discussions have often focused on the challenge marketers face in just getting the volume of content they need for increasingly segmented programs. That requirement hasn’t ended, but I heard more discussion of how to create the right content mix and how to create content that is compelling enough to attract attention. To some extent, this argues against the notion of machine-generated content, which will probably never be better than mediocre and formulaic. But I can easily imagine a world where humans create a few great pieces of tentpole content and use a lot of simple, machine-created messages to feed people to it. The machine-based messages won't be brilliant but they'll be effective because they're highly tailored to their targets. This tailoring will be enabled by behavioral and intent data, which were also popular topics at the conference.
I also have one other observation, which was totally unexpected (the best type!). It might be just my imagination, but I think I sensed a bit of overconfidence among marketers about their ability to buy new technology. This is certainly surprising, given that until recently marketers have been frightened of technology more than anything else. I’ll speculate that a new generation of marketers is more comfortable with technology in general and is now reaching positions with control over purchasing decisions. Mostly that's great: the industry can’t advance if marketers are afraid to try new things. But some of these buyers may not realize that they are unfamiliar with the full scope of products available or that deploying complex technology is much harder than signing up for a new software-as-a-service application. Let me be clear that this concern is based on one conversation I had and one comment that a friend overheard. So I might be overreacting. Still, it’s something to guard against; overconfidence can lead to cavalier decisions that are just as harmful as indecision based on fear.
Monday, February 15, 2016
Landscape of Machine Intelligence Systems for Marketing
I’ll be speaking next month at the MarTech conference on How Machine Intelligence Will Really Change Marketing. This required assembling a list of marketing systems using machine intelligence, which pretty much inevitably led to the logoscape below.
I wasn’t initially enthusiastic about the idea – could there be anything less original? – but have found the result surprisingly useful. In particular, it illustrates several points that would otherwise have been hidden or much harder to convey. These include:
- Lots of systems. You may think that machine intelligence is still a pretty rare thing. Not so. I found 23 categories with 140 systems, and know there are dozens of other products I could have included.
- Some categories are already crowded. Boxes with a lot of logos have a lot of competitors. This doesn’t make them mature in the sense of having a widely accepted standard approach, but it does mean that many people have recognized them as successful uses for machine intelligence. Conversely, categories with few competitors are more speculative – although a few strike me as pretty sure to succeed in the end.
- Few systems for marketing strategy. Some research I’ll cite at MarTech suggests that marketers split their time roughly equally between strategy and planning, program design and content creation, and data management and analytics. I’ve classified vendors into those categories. I then make a further distinction between systems that help marketers with decisions and systems that make decisions without marketer involvement. This distinction is very loose, but that’s a topic for another day. What’s immediately obvious is that there are very few systems for strategy and planning, and none of those are actually deciders. My take on this is that CMOs aren’t ready to delegate strategic decisions to machines, although another explanation is that CEOs aren’t ready to delegate marketing strategy to the CMOs.
- Decider systems for design. The design category is crowded with systems for the established applications of personalization and programmatic ad bidding. Perhaps more surprising, there is also a rapidly growing number of products that create content such as copy, email dialogs, and even Web pages. Nearly all of these are deciders – perhaps because they work with volumes of choices so huge that only computers can handle them. Helper systems aren’t much use in those situations.
- All kinds of systems for data. This is the most populated area, with roughly half the categories and half the total vendors. It's also the group with the most vendors I didn’t include – for example, there are probably 100 social media monitoring systems alone, most of which use at least some basic machine intelligence for language processing. This group is about evenly split between helpers and deciders, reflecting the variety and complexity of data-related tasks. One reason this group is so large is that many of the applications, such as data extraction and predictive model building, are also used for purposes outside of marketing.
I’ll draw some other lessons from this chart in my MarTech talk. You can still join us by registering here. In the meantime, I hope this chart helps you realize the scope of machine intelligence applications in marketing today and inspires you to explore more deeply how they can help in your own work.
Sunday, February 07, 2016
Marketing attribution systems: a quick look at the options
I’ve seen a lot of attribution vendors recently. If you're a regular reader here, you saw my reviews of Claritix (last week) and BrightFunnel (in December). Last week I caught up with Jeff Winsper of Black Ink, which I'll hopefully review before too long. Bizible also popped up recently, although I don’t recall the occasion; possibly something related to their interesting survey on “pipeline marketing” and attribution methods.
My rational brain knows that there’s probably no reason for this flurry of sightings beyond pure coincidence. But it’s human to see patterns where they don’t exist, so I did find myself wondering if attribution is becoming a hot topic. I can easily come up with a good story to explain it: marketing technology has reached a new maturity stage where the data needed for good attribution is now readily available, the cost of processing that data has fallen far enough to make it practical, and the need has reached a tipping point as the complexity of marketing has grown. So, clearly, 2016 will be The Year of Attribution (as Anna Bager and Joe Laszlo of the Internet Advertising Bureau have already suggested).
Or not. Sometimes random is just random. But now that this is on my mind, I've taken a look at the larger attribution landscape. Quick searches for "attribution" on G2 Crowd and TrustRadius turned up lists of 29 and 17 vendors, respectively – neither including Brightfunnel or Claritix, incidentally. A closer look found that 13 appeared on both sites, that each site listed several relevant vendors that the other missed, and that both sites listed multiple vendors that were not really relevant. For what it's worth, eight of the 13 vendors listed on both sites were bona fide attribution systems -- which I loosely define to mean they assign fractions of revenue to different marketing campaigns. I wouldn't draw any grand conclusions from the differences in coverage on G2 Crowd and TrustRadius, except to offer the obvious advice to check both (and probably some of the other review sites or vendor landscapes) to assemble a reasonably complete set of options.
I've presented the vendors listed in the two review sites below, grouping them based on which site included them and whether I qualified them as relevant to a quest for an attribution vendor. I've also added a few notes based on the closer look I took at each system in order to classify it. The main questions I asked were:
- Does the system capture individual-level data, not just results by channel or campaign? You need the individual data to know who saw which messages and who ended up making a purchase. Those are the raw inputs needed for any attempt at estimating the impact of individual messages on the final result.
- Does the system capture offline as well as online messages? You need both to understand all influences on results. This question disqualified a few vendors that look only at online interactions. In practice, most vendors can incorporate whatever data you provide them, so if you have offline data, they can use it. TV is a special case because marketers don't usually know whether a specific individual saw a particular TV message, so TV is incorporated into attribution models using more general correlations.
- How does the vendor do the attribution calculations? Nearly all the vendors use what I've labeled an "algorithmic" approach, meaning they perform some sort of statistical analysis to estimate the attributed values. The main alternative is a "fractional" method that applies user-assigned weights, typically based on position in the buying sequence and/or the channel that delivered the message. The algorithmic approach is certainly preferred by most marketers, since it is based in actual data rather than marketers' (often inaccurate) assumptions. But algorithmic methods need a lot of data, so B2B marketers often use fractional methods as a more practical alternative. It's no accident that Bizible, the only B2B specialist listed here, is also the only one of these vendors that uses a fractional method; B2B specialists BrightFunnel and Claritix do the same. It's also important to note that the technical details of the algorithmic methods differ greatly from vendor to vendor, and of course each vendor is convinced that their method is by far the best approach.
- Does the vendor provide marketing mix models? These resemble attribution except they work at the channel level and are not based on individual data. Classic marketing mix models instead look at promotion expense by channel by market (usually a geographic region, sometimes a demographic or other segment) and find correlations over time between spending levels and sales. Although mix models and algorithmic attribution use different techniques and data, several vendors do both and have connected them in some fashion.
- Does the vendor create optimal media plans? I'm defining these broadly to include any type of recommendation that uses the attribution model to suggest how users should reallocate their marketing spend at the channel or campaign level. Systems may do this at different levels of detail, with different levels of sophistication in the optimization, and with different degrees of integration to media buying systems.
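To make the fractional approach in the questions above concrete, here is a minimal sketch of the U-shaped, position-based weighting that many B2B marketers use. The 40/20/40 split is a common textbook illustration, not any particular vendor's default, and the function name is my own:

```python
def fractional_attribution(touches, revenue, weights=(0.4, 0.2, 0.4)):
    """Split revenue across campaign touches using user-assigned position weights.

    touches: ordered list of campaign names in the buying sequence.
    weights: (first-touch share, pooled middle share, last-touch share).
    Returns a dict of campaign -> attributed revenue.
    """
    first_w, middle_w, last_w = weights
    credit = {t: 0.0 for t in touches}

    # A single touch gets all the credit.
    if len(touches) == 1:
        credit[touches[0]] = revenue
        return credit

    credit[touches[0]] += revenue * first_w
    credit[touches[-1]] += revenue * last_w

    middle = touches[1:-1]
    if middle:
        # The middle pool is divided evenly among interior touches.
        share = revenue * middle_w / len(middle)
        for t in middle:
            credit[t] += share
    else:
        # Only two touches: split the middle pool between first and last.
        credit[touches[0]] += revenue * middle_w / 2
        credit[touches[-1]] += revenue * middle_w / 2
    return credit
```

The appeal, and the weakness, of this method is visible in the code: the weights are simply asserted by the marketer, whereas an algorithmic system would estimate them statistically from conversion data.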
Attribution Systems
G2 Crowd and TrustRadius
- Abakus: individual data; online and offline; algorithmic; optimal media plans
- Bizible: individual data; online and offline; fractional; merges marketing automation plus CRM data; B2B
- C3 Metrics: individual data; online and TV; algorithmic; optimal media plans
- Conversion Logic: individual data; online and TV; algorithmic; optimal media plans
- Convertro: individual data; online and offline; algorithmic; mix model; optimal media plans; owned by AOL
- MarketShare DecisionCloud: individual data; online and offline; algorithmic; mix models; optimal media plans; owned by Neustar
- Rakuten Attribution: individual data; online only; algorithmic; optimal media plans; formerly DC Storm, acquired by Rakuten marketing services agency in 2014
- Visual IQ: individual data; online and offline; algorithmic; optimal media plans
- BlackInk: individual data; online and offline; algorithmic; provides customer, marketing & sales analytics
- Kvantum Inc.: individual data; online and offline; algorithmic; mix models; optimal media plans
- Marketing Evolution: individual data; online and offline; algorithmic; mix model; optimal media plans
- OptimaHub MediaAttribution: individual data; online and offline; attribution method not clear; data analytics agency with tag management, data collection, and analytics solutions
- Adometry: individual data; online and offline; algorithmic; mix models; optimal media plans; owned by Google
- ThinkVine: individual data; online and offline; algorithmic; mix models; optimal media plans; uses agent-based and other models
- Optimine: individual data; online and offline; algorithmic; optimal media plans
G2 Crowd and TrustRadius
- AdGear Advertiser: full stack advertising platform inc. ad serving, bidding, exchange technology
- DialogTech: tracks inbound phone calls
- Google Analytics Premium: ad data analytics including algorithmic attribution
- Invoca: tracks inbound phone calls
- Telmetrics: tracks inbound phone calls
G2 Crowd only
- Adinton: Adwords bid optimization and attribution; uses Google Analytics for fractional attribution
- Blueshift Labs: real-time segmentation and content recommendations; individual data but apparently no attribution
- IBM Digital Analytics Impression Attribution: individual data; online only; shows influence (not clear whether fractional or algorithmic attribution); based on Coremetrics
- LIVE: for clients of WPP group; does algorithmic attribution and optimization
- Marchex: tracks inbound phone calls
- Pathmatics: digital ad intelligence; apparently no attribution
- Sizmek: online ad management; provides attribution through alliance with Abakus
- Sparkfly: retail specialist; individual data; focus on connecting digital and POS data; campaign-level attribution but apparently not fractional or algorithmic
- Sylvan: financial services software; no marketing attribution
- TagCommander: tag management system; real-time marketing hub with individual profiles and cross-channel data; custom fractional attribution formulas
- TradeTracker: affiliate marketing network
- Zeta Interactive ZX: digital marketing agency offering DMP, database, engagement and related attribution; mix of tech and services
Tuesday, February 02, 2016
Claritix Assembles Marketing Data for Analysis: Maybe That's Enough
Most of the work in any marketing analytics project is integrating data from multiple systems. Claritix carries this insight to one logical conclusion by offering a system that does data assembly, basic reporting, and little else. No fancy attribution methodologies or custom journey maps here (although they’re on the way). I’m not fully convinced this is enough to justify using Claritix but am open to the possibility. Here’s a deeper look.
As I just said, Claritix’s chief function is assembling customer data from multiple sources. The system has prebuilt connectors to import data from popular vendors including Salesforce.com, Marketo, Hubspot, SAP, SugarCRM, and Facebook. It can connect with others through standard APIs. The imported data is loaded into MongoDB, a NoSQL database that offers great flexibility and ease of deployment. Claritix applies sophisticated algorithms to clean the data and match contacts based on similarity. It also uses matches created elsewhere, such as lead IDs used to synchronize CRM and marketing automation data or cookie IDs imported from Google Analytics. The matching happens at both the contact and account level. Imported data includes contacts, funnel stages, campaigns, channels, revenue, and content.
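Claritix hasn’t published the details of its matching algorithms, but similarity-based contact matching across systems generally looks something like the sketch below: normalize the fields, trust an exact email match, and otherwise fall back on fuzzy name and company comparison. The field names and threshold here are assumptions for illustration, not Claritix’s implementation:

```python
from difflib import SequenceMatcher

def normalize(contact):
    """Lowercase and trim the fields so trivial differences don't block a match."""
    return {
        "email": contact.get("email", "").strip().lower(),
        "name": " ".join(contact.get("name", "").lower().split()),
        "company": contact.get("company", "").strip().lower(),
    }

def same_contact(a, b, threshold=0.85):
    """Decide whether two imported records refer to the same person."""
    a, b = normalize(a), normalize(b)

    # An exact (normalized) email match is the strongest signal.
    if a["email"] and a["email"] == b["email"]:
        return True

    # Otherwise require both name and company to be highly similar.
    name_sim = SequenceMatcher(None, a["name"], b["name"]).ratio()
    company_sim = SequenceMatcher(None, a["company"], b["company"]).ratio()
    return name_sim >= threshold and company_sim >= threshold
```

Real matching engines add many more signals (phone numbers, cookie IDs, account-level rollups) and tune the thresholds statistically, but the basic pattern of deterministic keys backed by fuzzy comparison is the same.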
Users can access this data through dashboards, charts, and views. There are different dashboards for the main data types (campaigns, funnel stages, channels, etc.). These provide basic information such as impressions, engagements, visits, deals and revenue by campaign, or sources, stages, conversion rates, and average duration by funnel stage. The specific measures depend on the data type. Users can drill into details down to the contact level. Views can show results for user-defined segments.
Claritix also lets users assemble information into binders, which contain pages that are snapshots of dashboards, charts, and notes. These can be exported to PDF or slides or viewed directly within Claritix. Binders can update themselves at regular intervals. Collaboration features let users attach virtual “sticky notes” to screen images and share these via Slack or Claritix’s own communication channels.
So far as I know, that’s pretty much all that the system does. There is no capability, for example, to write the assembled data back to source systems for their own use. Claritix tells me this has been quite sufficient for their initial clients, who have liked the fact that set-up is virtually all automated or handled by the vendor. This has let them assemble data across multiple systems in ways that would otherwise have been impossible or hugely expensive. Certainly price is an advantage: Claritix starts at $1,000 per month for up to 10,000 contacts in the database, with the cost per contact decreasing at higher volumes. A system with more advanced reporting, such as Brightfunnel (which I reviewed in December and which has been a consulting client), starts at $3,000 per month. Still, you have to decide whether you’ll need the features that Claritix is missing; if so, you’ll end up missing many of the benefits that good marketing measurement provides. As Captain Planet used to say, the power is yours.