Tuesday, December 22, 2015

Brightfunnel Gives B2B Marketers Self-Service Revenue Attribution

Marketing without revenue attribution is like playing golf without keeping score: it might be fun but you can’t tell whether you’re doing a good job. But while keeping score in golf is simple, figuring out the impact of marketing programs is quite tough. In fact, B2B marketers face several challenges on the road to perfect attribution.  The simplest is just connecting marketing leads to closed sales, which is an issue because the data in sales systems is often incomplete.  A higher level ties specific marketing programs to individual leads, and through them to accounts and deals. The most advanced efforts estimate the relative impact of different marketing programs on the final result.  The problems must be solved in sequence: you must connect leads to revenue before you can connect marketing programs to revenue, and must connect all programs to revenue before you can start to allocate credit among them.

Brightfunnel compares results of different attribution methods

Most marketers struggle to get past the first level. There wouldn’t be a problem if sales people religiously associated every lead with the right account. But this doesn’t always happen for many reasons. So marketers must either accept that they’ll miss some connections, do laborious manual research to make the right matches, or rely on specialized software to do the work.

This is where Brightfunnel comes in. Brightfunnel reads lead, account, and opportunity data from Salesforce.com and supplies missing connections based on attributes like company name. Since Salesforce.com can also capture lead source (i.e., original marketing program), Brightfunnel can build a complete chain linking marketing programs to leads to accounts to opportunities. The system also has connectors to bring in data from the Oracle Eloqua and Marketo marketing automation systems, which will often include marketing programs and leads that never made it into Salesforce. But Brightfunnel says that most clients work with Salesforce data alone.

Making connections is certainly important, but Brightfunnel also provides tools to use the resulting information. Marketers can analyze results by marketing program, time period, customer segment, or other variables. They can compare performance over time, compare specific programs against an average, and see top campaigns by lead source. Because the imported opportunity data includes sales stage, reports can also track movement through the sales funnel, calculating conversion rates and velocity (time to move from one stage to the next). The system can use this to forecast the value and timing of future sales from deals currently in the pipeline.
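
To make the funnel reporting concrete, here is a minimal sketch of conversion rate and velocity between two sales stages. The record layout and stage names are my own illustrative assumptions, not Brightfunnel's actual data model:

```python
from datetime import date

# Hypothetical opportunity records: stage-entry dates keyed by stage name.
opportunities = [
    {"MQL": date(2015, 1, 5), "SQL": date(2015, 1, 20), "Closed Won": date(2015, 3, 1)},
    {"MQL": date(2015, 2, 1), "SQL": date(2015, 2, 25)},  # stalled at SQL
    {"MQL": date(2015, 2, 10)},                           # stalled at MQL
]

def funnel_metrics(opps, from_stage, to_stage):
    """Conversion rate and average velocity (days) between two stages."""
    entered = [o for o in opps if from_stage in o]
    converted = [o for o in entered if to_stage in o]
    rate = len(converted) / len(entered) if entered else 0.0
    days = [(o[to_stage] - o[from_stage]).days for o in converted]
    velocity = sum(days) / len(days) if days else None
    return rate, velocity

rate, velocity = funnel_metrics(opportunities, "MQL", "SQL")
```

The same two numbers, computed stage by stage, are all a system needs to project the value and timing of deals currently in the pipeline.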

What about that third level of attribution, splitting revenue from a single sale among different marketing programs? Brightfunnel offers two varieties of multitouch attribution: one where credit is shared evenly among all programs that touched a lead, and one where credit is split according to a fixed formula of 40% to the first touch, 40% to the final touch, and the remaining 20% shared among the middle touches. Brightfunnel can also show first-touch and last-touch attribution, which attribute all revenue to the first or last touch, respectively.
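
These weighting schemes are simple enough to sketch. The code below is my own generic illustration of even and position-based (40/20/40) attribution, not Brightfunnel's implementation; the handling of a two-touch path (a 50/50 split) is an assumption, since the formula only defines first, middle, and last:

```python
def position_weights(n, first=0.4, last=0.4):
    """Position-based split: 40% to first touch, 40% to last,
    remaining 20% shared evenly among middle touches."""
    if n == 1:
        return [1.0]
    if n == 2:
        return [0.5, 0.5]  # assumption: no middle touches to absorb the 20%
    middle = (1.0 - first - last) / (n - 2)
    return [first] + [middle] * (n - 2) + [last]

def even_weights(n):
    """Even multitouch: equal credit to every touch."""
    return [1.0 / n] * n

def attribute(revenue, touches, weight_fn):
    """Split revenue among touching programs per a weighting scheme."""
    weights = weight_fn(len(touches))
    return {t: revenue * w for t, w in zip(touches, weights)}

# Roughly 40% / 10% / 10% / 40% of the $10,000 deal.
credit = attribute(10000, ["webinar", "email", "trade show", "demo"], position_weights)
```

First-touch and last-touch attribution are just the degenerate weightings [1, 0, …, 0] and [0, …, 0, 1].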

Attribution aficionados will recognize that none of these is a fully satisfactory approach. The gold standard in attribution is advanced statistical methods that estimate the true incremental impact of each program on each lead. Brightfunnel is working on such a method but hasn’t released it yet. In the meantime, the simpler approaches give some useful insights – so long as you don’t forget they are not wholly accurate.

The value of Brightfunnel is less in advanced analytics than in the fact that it does the basic data assembly and lets marketers analyze data for themselves. Without a tool like Brightfunnel, detailed analysis often requires technical skill and tools that few marketers have available.

Brightfunnel was introduced in 2014 and has something under 100 clients. Pricing runs from $35,000 to $80,000 per year based on system modules and number of users. The amount of data doesn’t matter. Clients are mostly mid-sized tech companies – the usual early adopters for this sort of thing. The company raised $6 million in Series A funding in October 2015.

Wednesday, December 16, 2015

Future Marketing: Will Machines Take Over Half the Consumer Economy?

‘Tis the season for predictions. I’m not going to plague you with any new ones right now, but did want to expand a bit on the long-term vision I’ve been talking about in speeches and described briefly last July as “robotech”. The gist of this is that people will increasingly delegate day-to-day decisions to computers, meaning that most purchases will be based on machines selling to other machines.  (If you want a real-world example, think how search engine optimization already boils down to “selling” content to the Google ranking algorithms).  In this world, consumers still have choices but what they’re deciding is which machine to trust – in exactly the same way that you decide whether to let Google or Bing or something else be your primary search engine.

The key word in that sentence is “trust”.  People won’t want to double-check each action by the agents they delegate to buy their groceries, pick their restaurants, book their hotel rooms, arrange their transportation, and do other boring daily tasks. The chart below shows in more detail how I think current trends lead to this conclusion; I won’t bore you by walking through it step-by-step.

One question worth asking is, how much of the economy is likely to be affected by this change? After all, nearly all B2B purchases are already part of a larger relationship rather than isolated transactions. On the consumer side, large sectors like banking, insurance, health care, and housing are also governed by long-term contracts.

The table below shows a crude attempt to find an answer.  Using Census data, I classified each industry as B2B or B2C, and then split B2C between sectors that are already purchased through long-term relationships and those that will move to such relationships in the future. (I did actually have a category for those that will remain transactional, but that column ended up blank.) Industries that sell to both business and consumers were split 50/50.  I told you this was crude.

As you see, I decided that about 55% of the U.S. economy is B2B. The remaining portion is split evenly between sectors that are already sold through long-term contracts and those that will make that transition in the future. The bulk of the change will happen in the retail sector, where I think nearly every purchase will be based either on a direct subscription – such as contracting with a dealer to service your car rather than buying each repair individually – or an indirect subscription – such as having an automated travel agent pick the best airline, hotel, rental car, and restaurants for each trip. You might question some of my choices, but let me point out that even the clothing industry – where people theoretically want to make individual choices – is already seeing subscription business models where companies send products they think the consumer might like and the consumer can then keep what she wants. 

These figures are intended to give some weight to my otherwise vague assertion about the shift from transactional to relationship buying. If literally half the consumer economy is at stake (and the other half has already made the transition), that is surely worth paying attention to.

Wednesday, December 02, 2015

Automated Marketing Campaigns: An Immodest Proposal

I wrote last June about replacing traditional multi-step campaigns with a system that tracks customers through journey stages and executes short sequences of actions, called “plays”, at each stage.  The goal was to approach the perfect campaign design of “do the right thing, wait, and do the right thing again”.

I still think this approach makes sense, but it suffers from a major flaw: someone has to define all those stages and all those plays. This limits the number of actions available, since it takes considerable human time, effort, and insight to create each stage and play. Ideally, you’d let machines evolve the stages and plays autonomously. But while it’s easy for conventional predictive models to find the “next best action” in a particular situation, it’s much harder to find the best sequence of actions. This is why you can find dozens of automated products to personalize Web sites but none to automate multi-step campaign design.

The problem with multi-step campaigns is the number of options to test.  These have to cover all possible actions in all possible sequences at all possible time intervals. Few companies have enough customer activity to test all the possibilities, and, even if they did, it would take unacceptably long to stumble upon the best combinations and would sacrifice unacceptable amounts of revenue from customers in losing test cells.  In any case, available actions and customer behaviors are constantly changing, so the best approach will change over time – meaning the system would need to be constantly retesting, with all the resulting costs.

I’ve recently begun to imagine what I think is a solution. Let’s start with the problem that’s already solved, of finding the single next best action. You can conceive of my perfect campaign as a collection of next best actions. But a solution that merely executed the next best action for each customer every day would almost surely produce too many messages. One solution is a model that predicts the value* of each potential action, rather than simply ranking the actions against each other. This lets you set a minimum value for outbound messages. On days when no action meets this threshold, the system would simply do nothing. A still better approach is to explicitly consider “no action” as an option to test and build a model that gives it a value – presumably, the value being higher response to future promotions. Now you have a system that is organically evolving the timing of multi-step campaigns – and, better still, adapting that timing to behavior of each individual.
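
The selection rule in that paragraph reduces to a few lines. This is a minimal sketch of my own; the action names, scores, and threshold are invented for illustration, and in practice the values would come from predictive models:

```python
def next_best_action(scored_actions, threshold):
    """Pick the highest-value action, treating 'no action' as a candidate
    in its own right and suppressing any outbound message whose predicted
    value falls below the threshold."""
    action, value = max(scored_actions.items(), key=lambda kv: kv[1])
    if action != "no action" and value < threshold:
        return "no action"  # nothing clears the bar today
    return action

# "no action" carries its own modeled value: the expected lift in response
# to future promotions from resting the customer.
today = {"cross-sell email": 1.20, "renewal call": 3.40, "no action": 2.10}
choice = next_best_action(today, threshold=2.0)
```

On a day when every outbound action scores below the threshold, or when "no action" itself scores highest, the customer simply gets nothing, which is exactly how the system evolves multi-step timing per individual.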

But what about sequences of related actions (i.e., “plays”)? Let’s assume for a moment that the action sequences already exist, even if humans had to create them. This turns out to be a non-issue: if the best action to take with a customer is the second step in a sequence, the system should find that and choose it.  If some other action would be more productive, we’d want the system to pick that anyway. The only caveat is that the predictions must take into account previous actions, so any benefit from being part of a sequence is properly reflected in the value calculations. But a good model should consider previous actions anyway, whether or not they’re part of a formal sequence. At most, marketers might want to stop customers from receiving messages out of order.  This is easy enough to design – it just becomes part of the eligibility rules that limit the actions available for a given customer.  Such rules must exist for any number of reasons, such as location, products owned, or credit limit, so adding sequence constraints is little additional work.  In practice, the optimal sequence for different customers is likely to be different, so imposing a fixed sequence is often actively harmful.
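
Eligibility rules of this kind are just filters applied before scoring. Here is a sketch under my own invented schema (the rule names and action records are hypothetical), showing how a sequence constraint sits alongside ordinary business constraints:

```python
def eligible(action, customer, prior_actions):
    """Apply eligibility rules before scoring: business constraints plus an
    optional sequence constraint (all prerequisite steps already taken)."""
    rules = action.get("requires", {})
    if not set(rules.get("prior_steps", [])) <= set(prior_actions):
        return False  # would deliver a sequence message out of order
    if rules.get("min_credit", 0) > customer["credit_limit"]:
        return False  # ordinary business rule, same mechanism
    return True

actions = [
    {"name": "onboarding step 2", "requires": {"prior_steps": ["onboarding step 1"]}},
    {"name": "premium upsell", "requires": {"min_credit": 5000}},
    {"name": "newsletter", "requires": {}},
]
customer = {"credit_limit": 2000}
history = []  # no prior actions yet

available = [a["name"] for a in actions if eligible(a, customer, history)]
```

Once "onboarding step 1" enters the customer's history, step 2 becomes eligible automatically; the scoring model, not a fixed flowchart, decides whether it actually gets sent.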

So far so good, but we haven’t really solved the problem of too many combinations. This requires another level of abstraction to reduce the number of options that need to be tested. When it comes to timing, initial tests of waiting for random intervals between actions should pretty quickly uncover when it’s too soon for any new action and when it’s too long to wait. This can be abstracted from results across all actions, so the learning should come quite quickly. Once reliable estimates are available, they can be used in prediction models for all possible actions. Future tests can be then limited to refining the timing within the standard range, with only a few tests outside the range to make sure nothing has changed.
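
The timing abstraction can be sketched by pooling random-interval test results across all actions and finding the range of waits with acceptable response. Everything here (the data, the bucket width, the minimum rate) is an invented illustration of the idea, not a production design:

```python
from collections import defaultdict

# Hypothetical pooled test results across all actions: (days_waited, responded).
results = [(1, 0), (1, 0), (3, 1), (5, 1), (7, 1), (14, 0), (21, 0), (3, 1), (10, 1)]

def timing_window(results, min_rate=0.5, bucket=7):
    """Find the range of wait intervals whose pooled response rate clears a
    minimum.  Pooling across actions means the estimate firms up quickly."""
    hits, trials = defaultdict(int), defaultdict(int)
    for days, responded in results:
        b = days // bucket
        trials[b] += 1
        hits[b] += responded
    good = [b for b in trials if hits[b] / trials[b] >= min_rate]
    if not good:
        return None
    return min(good) * bucket, (max(good) + 1) * bucket  # [lo, hi) in days
```

Future tests then concentrate inside the returned window, with only occasional probes outside it to detect change.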

The same approach could reduce other types of testing to keep the number of combinations within reason. For example, actions can be classified by broad types (cross sell, upsell, retention, winback, price level, product line, customer support, education, etc.) to quickly understand which types of actions are most productive in given situations. Testing can then focus on the top-ranked alternatives. This is relatively straightforward once actions are properly tagged with such attributes – or machine learning discovers the relevant attributes without tagging. Again, the system will still test some occasional outliers to find any new patterns that might appear.

Incidentally, this approach also helps to solve the problem of sequence creation. Category-level predictions would show when a customer is likely to respond to another action within a given category. If the system is consistently running out of fresh actions in one category, that’s a strong hint that more should be created. Thus, a sequence (or play) is born.

So – we’ve found a way for machines to design multi-step sequences and to reduce testing to a reasonable number of combinations. You might be thinking this is interesting but impractical because it requires running hundreds or thousands of models against each customer every day or possibly more often. But it turns out that’s not necessary. If we return to our concept of a value threshold and assume that time plays a role in every model score, then it’s possible to calculate in advance when the value of each action for each customer will reach the threshold. The system can then find whichever action will reach the threshold first and schedule that action to execute at that time. No further calculations are needed unless the model changes or you get new information about the customer – most likely because they did something. Of course, you’d want to recalculate the scores at that time anyway. In most businesses, such changes happen relatively rarely, so the number of customers with recalculated scores on any given day is a tiny fraction of the full base.
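
The precalculation trick depends only on each model score being a known function of time. As a minimal sketch, assume (purely for illustration) that each action's value grows linearly with days since last contact; then the threshold-crossing day is solvable in closed form and the whole daily scoring run disappears:

```python
import math

def days_to_threshold(base, slope, threshold):
    """Days until a linearly time-driven score (base + slope * days) reaches
    the threshold; infinity if it never will."""
    if base >= threshold:
        return 0
    if slope <= 0:
        return math.inf
    return math.ceil((threshold - base) / slope)

def schedule(actions, threshold):
    """Pick the action that will cross the threshold first, and the day to
    run it.  No rescoring is needed until the model changes or new customer
    information arrives."""
    best = min(actions, key=lambda a: days_to_threshold(a["base"], a["slope"], threshold))
    return best["name"], days_to_threshold(best["base"], best["slope"], threshold)

actions = [
    {"name": "renewal reminder", "base": 1.0, "slope": 0.5},   # crosses 3.0 at day 4
    {"name": "cross-sell email", "base": 0.5, "slope": 0.25},  # crosses 3.0 at day 10
]
```

Any monotone time function works the same way; linear growth is just the simplest case to solve for.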

None of the concepts I’ve introduced here – value thresholds, explicitly testing “no action”, sharing attributes across models, and precalculating future model scores – is especially advanced. I’d be shocked if many developers hadn’t already started to use them. But I’ve yet to see a vendor pull them together into a single product or even hint they were moving in this direction. So this post is my little holiday gift – and challenge – to you, Martech Industry: it’s time to use these ideas, or better ones of your own, to reinvent campaign management as an automated system that lets marketers focus again on customers, not technology.

* Value calculation is its own challenge.  But marketers need to pick some value measure no matter how they build their campaigns, so lack of a perfect value measure isn't a reason to reject the rest of this argument.

Saturday, November 28, 2015

Model Factory from Modern Analytics Offers High Scale Predictive Modeling for Marketers

Remember when I asked two weeks ago whether predictive models are becoming a commodity? Here’s another log for that fire: Model Factory from Modern Analytics, which promises as many models as you want for a flat fee starting at $5,000 per month. You heard that right: an all-you-can-eat, fixed-price buffet for predictive models. Can free toasters* and a loyalty card be far behind?

Of course, some buffets sell better food than others. So far as I can tell, the models produced by Model Factory are quite good. But buffets also imply eating more than you should. As Model Factory’s developers correctly point out, many organizations could healthily consume a nearly unlimited number of models. Model Factory is targeted at firms whose large needs can’t be met at an acceptable cost by traditional modeling technologies. So the better analogy might be Green Revolution scientists increasing food production to feed the starving masses.

In any case, the real questions are what Model Factory does and how. The "what" is pretty simple: it builds a large number of models in a fully automated fashion. The "how" is more complicated.  Model Factory starts by importing data in known structures, so users still need to set up the initial inputs and do things like associate customer identities from different systems. Modern Analytics has staff to help with that, but it can still be a substantial task. The good news is that set-up is done only when you’re defining the modeling process or adding new sources, so the manual work isn't repeated each time a model is built or records are scored. Better still, Modern Analytics has experience connecting to APIs of common data sources such as Salesforce.com, so a new feed from a familiar source usually takes just a few hours to set up.  Model Factory stores the loaded data in its own database. This means models can use historical data without reloading all data from scratch before each update.

Once the data flow is established, users specify the file segments to model against and the types of predictions.  The predictions usually describe likelihood of actions such as purchasing a specific product but they could be something else. Again there’s some initial skilled work to define the model parameters but the process then runs automatically. During a typical run, Model Factory evaluates the input data, does data prep such as treating outliers and transforming variables, builds new models, checks each model for usable results, and scores customer records for models that pass.

The quality check is arguably the most important part of the process, because that’s what prevents Model Factory from blindly producing bad scores due to inadequate data, quality problems, or other unanticipated issues. Model Factory flags bad models – measured by traditional statistical methods like the c-score – and gives users some information about their results. It’s then up to the human experts to dig further and either accept the model as is or make whatever fixes are required. Scores from passing models are pushed to client systems in files, API calls, or whatever else has been set up during implementation.
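
The c-score (concordance statistic, equivalent to AUC) behind such a gate is easy to compute from holdout scores. This is a generic sketch of the statistic and a pass/fail floor, with an arbitrary 0.65 floor chosen for illustration; it is not Modern Analytics' actual criterion:

```python
def c_statistic(scores, labels):
    """Concordance (c-statistic / AUC): probability that a randomly chosen
    positive outscores a randomly chosen negative, ties counting half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = len(pos) * len(neg)
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / pairs

def passes_quality_gate(scores, labels, floor=0.65):
    """Ship scores only if the model clears the floor; otherwise flag it
    for human review."""
    return c_statistic(scores, labels) >= floor
```

A model scoring near 0.5 is no better than random, so an automated factory must catch those before their scores reach client systems.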

If you’ve been around the predictive modeling industry for a while, you know that automated model development has been available in different forms for a long time. Indeed, Model Factory's own core engine was introduced five years ago. What makes Model Factory special, then and now, is automating the end-to-end process at high scale.  How high?  There's no simple answer because the company can adjust the hardware to provide whatever performance a client requires.  In addition to hardware, performance is driven by types of models, number of records, and size of each record.  A six-processor machine working with 100,000 large records might take 2 to 40 minutes to build each model and about 30 seconds per model to score all records.**

Model Factory now runs as a cloud-based service, which lets users easily upgrade hardware to meet larger loads. A new interface, now in beta, lets end-users manage the modeling process and view the results.  Even with the interface, tasks such as exploring poorly performing models take serious data science skills. So it would still be wrong to think of Model Factory as a tool for the unsophisticated. Instead, consider Model Factory as a force multiplier for companies that know what they’re doing and how to do it, but can’t execute the volumes required.

Pricing for Model Factory starts at $5,000 per month for modest hardware (4 vCPU/8Gb RAM machine with 500 Gb fast storage).  Set-up tasks are covered by an implementation fee, typically around $10,000 to $20,000. Not every company will have the appetite for this sort of system, but those that do may find Model Factory a welcome addition to their marketing technology smorgasbord.


* For the youngsters: banks used to give away free toasters to attract new customers. This was back, oh, during the 1960’s. I wasn’t there but have heard the stories.

** The exact example provided by the company was: On a 6 vCPU, 64Gb RAM machine, building 500 models on between 20K and 178K records with up to 20,000 variables per record takes an average between 2 and 40 minutes to build each model and 30 seconds per model to score all records.  This hardware configuration would cost $12,750 per month.

Thursday, November 19, 2015

The Big Willow Links Intent Data to Devices to Companies...Another Flavor of Account Based Marketing

With interest in account based marketing (ABM) skyrocketing past even hot topics like intent data and predictive marketing, it’s no surprise to find debates over the true meaning of the term. I recently had a discussion along those lines with Charlie Tarzian and Neil Passero of The Big Willow, who argued that account based marketing must extend beyond reaching target accounts to include messages based on location and intent. As you might suspect, that is exactly what The Big Willow does.

What The Big Willow does with intent data is interesting whether it’s the One True ABM or not. The company tracks which devices are consuming what content, associates the content with intent, and then associates the devices - as much as it can - to companies.

The Big Willow uses data from media it serves directly and from the nightly feeds that ad networks and publishers send to media buyers.  This tells it which devices saw which content.  The system relates the content to intent by parsing it for keywords and phrases related to The Big Willow clients' products and services.  Devices are associated with companies using reverse IP lookup for IP addresses registered directly to a specific business.  If the IP address belongs to a service provider like Verizon or Comcast, The Big Willow applies a proprietary method that finds the device location based on IP address and infers a match with businesses near that location. That’s far from perfect but can work if there is just one business in a particular industry near that location. What makes this worth the trouble is it can double the number of devices linked to target companies.
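
The two-step matching logic can be sketched generically. Everything below is hypothetical (the IP ranges, the location index, and the single-candidate rule are my own illustration of the approach, not The Big Willow's proprietary method):

```python
import ipaddress

# Hypothetical corporate IP registrations and a location-keyed business index.
REGISTERED = {ipaddress.ip_network("203.0.113.0/24"): "Acme Corp"}
NEARBY_BY_LOCATION = {"austin-tx": ["Initech"]}  # businesses near a geo-IP location

def device_to_company(ip, location):
    """Two-step match: direct reverse-IP lookup first, then a location-based
    inference that only fires when exactly one candidate business is nearby."""
    addr = ipaddress.ip_address(ip)
    for network, company in REGISTERED.items():
        if addr in network:
            return company, "registered"
    # IP belongs to a service provider: fall back to geography, but only
    # when the match is unambiguous.
    candidates = NEARBY_BY_LOCATION.get(location, [])
    if len(candidates) == 1:
        return candidates[0], "inferred"
    return None, "unmatched"
```

The single-candidate restriction is what keeps the fallback from polluting the data: an ambiguous location yields no match rather than a wrong one.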

The location-based approach clearly has its limits.  But it’s important to put those aside and go back to the fact that The Big Willow is tracking consumption by devices, not cookies.  This matters because cookies are increasingly ineffective in an era of mobile devices and frequent cookie deletion.  It’s also important to bear in mind that The Big Willow is storing consumption of all content for all devices it sees, meaning it can analyze past behavior without advance preparation. This lets it immediately identify prospects who have shown interest in a new client’s industry.

The Big Willow uses this historical data to examine a client’s current marketing automation and CRM databases, distinguishing companies showing intent from those that are inactive, and also finding active companies that are not already in the corporate database. This analysis takes about two weeks to complete. The Big Willow can then target advertising at those audiences, including Web display ads to companies that have not yet visited the client’s own Web site. This extends beyond the usual ABM retargeting of site visitors. Of course, since The Big Willow is capturing intent, it can tailor the ads to the buying stage of each company.

As a final trick, The Big Willow can also track which devices have seen the client’s ads and then use a pixel on the client’s Web site to find which of those devices eventually make a visit. This captures many more connections than the traditional approach of tracking visitors who have clicked on a company ad – which the vast majority of visitors do not.

In short, The Big Willow provides an interesting option for business marketers who want to do intent-based account targeting. It probably won’t be the only tool anyone uses, but it is worth considering as something to add to your mix. Pricing ranges from $10,000 to $20,000 per month based on specific deliverables and services. The company was founded in 2011 and has dozens of clients.

Monday, November 09, 2015

Predictive Marketing Vendors Look Beyond Lead Scores

It’s clear that 2015 has been the breakout year for predictive analytics in marketing, with at least $242 million in new funding, compared with $376 million in all prior years combined.

But is it possible that predictive is already approaching commodity status? You might think so based on the emergence of open source machine learning tools like H2O and Google’s announcement today that it is releasing an open source version of its TensorFlow artificial intelligence engine.

Maybe I shouldn't be surprised that predictive marketing vendors seem to have anticipated this.  They are, after all, experts at seeing the future. At least, recent announcements make clear that they’re all looking to move past simple model building.  I wrote last month about Everstring’s expansion to the world of intent data and account based marketing. The past week brought three more announcements about predictive vendors expanding beyond lead scoring.

Radius kicked off the sequence on November 3 with its announcement of Performance, a dashboard that gives conversion reports on performance of Radius-sourced prospects. What’s significant here is less the reporting than that Radius is moving beyond analytics to give clients lists of potential customers. In particular, it’s finding new market segments that clients might enter – something different from simply scoring leads that clients present to it or even from finding individual prospects that look like current customers. This isn’t a new service for Radius but it’s one that only some of the other predictive modeling vendors provide.

Radius also recently announced a very nice free offering, the CMO Insights Report.  Companies willing to share their Salesforce CRM data can get a report assessing the quality of their CRM records, listing the top five data elements that identify high-value prospects, and suggesting five market segments they might pursue. This is based on combining the CRM data with Radius’ own massive database of information about businesses. It takes zero effort on the marketer’s part and the answer comes back in 24 hours. Needless to say, it’s a great way for Radius to show off its highly automated model building and the extent of its data. I imagine that some companies will be reluctant to sign into Salesforce via the Radius Web site, but if you can get over that hurdle, it’s worth a look.

Infer upped the ante on November 5 with its Prospect Management Platform. This also extends beyond lead scoring to provide access to Infer’s own data about businesses (which it had previously kept to itself) and do several types of artificial intelligence-based recommendations. Like Radius, Infer works by importing the client's CRM data and enhancing it with Infer's information.  The system also has connectors to import data from marketing automation and Google Analytics.  It then finds prospect segments with above-average sales results, segments that are receiving too much or too little attention from the sales team, and segments with significant changes in other key performance indicators.

Like the Pirate Code, Infer's recommendations are more guidelines than actual rules: it’s up to users to review the findings and decide what, if anything, to do with them. Users who create segments can then have the system automatically track movement of individuals into and out of segments and define actions to take when this occurs. The actions can include sending an alert or creating a task in the CRM system, assigning the lead to a nurture campaign in marketing automation, or using an API to trigger another external action. Infer plans to also recommend the best offer for each group, although this is not in last week’s release – which is available today to current clients and will be opened to non-customers in early 2016. That last option is an interesting extension in itself, meaning Infer could be used by marketers who have no interest in lead scoring.

Mintigo’s news came today. It included some nice enhancements including a new user interface, account-based lead scores, and lists of high-potential net new accounts. But the really exciting bit was preannouncement of Predictive Campaigns, which is just entering private beta.  This is Mintigo’s attempt to build an automated campaign engine that picks the best treatment for each customer in each situation.

I've written about this sort of thing many times, as recently as this July and as far back as 2013. Mintigo’s approach is to first instrument the client’s marketing efforts across all channels to track promotion response; then run automated a/b tests to see how each offer performs in different channels for different prospects; use the results to build automated, self-adjusting predictive response models; and then set up a process to automatically select the best offer, channel, and message timing for each customer, execute it, wait for response, and repeat the cycle. Execution happens by setting up separate marketing automation campaigns for the different offers.  These campaigns execute Mintigo’s instructions for the right channel and timing for each prospect, capture the response, and alert Mintigo to start again. The initial deployment is limited to Oracle Eloqua, which had the best APIs for the purpose, although Mintigo plans to add other marketing automation partners in the future.

Conceptually, this is exactly the model I have proposed of “do the right thing, wait, and do the right thing again”. Mintigo’s actual implementation is considerably messier than that, but such is the price of working in the real world. There are still nuances to work out, such as optimizing for long-term value rather than immediate response, incorporating multi-step campaigns, finding efficient testing strategies, and automating offer creation. And of course this is just a pre-beta announcement. But, it’s still exciting to see progress past the traditional limits of predefined campaign flows. And, like the other developments this week, it’s a move well beyond basic lead scoring.

Thursday, November 05, 2015

Teradata Plans to Sell Its $200 Million Marketing Application Business. Any Takers?

Teradata today announced it plans to sell its Marketing Applications business.  I’ll drop the usual analyst pose of omniscience to admit I didn’t see this coming. It’s only three weeks since Teradata expanded its marketing suite by buying a new Data Management Platform – a move I felt made great sense. They also briefed me at that time on a slew of updates to their other marketing products, demonstrating continued forward movement. There was no clue of a pending sale, although I strongly suspect the people briefing me had no idea it was coming.

According to financial statements within the Teradata announcement, Marketing Applications revenue was down about 9% this year, which is surprising in a generally strong martech market but in line with the rest of Teradata’s business. Teradata told me separately that their marketing cloud business grew 22% year-on-year this quarter, suggesting that the decline came in the older, on-premise products and/or related services. As you may know, Teradata’s marketing applications business was a mashup of Teradata's original, on-premise marketing product, based on the Ceres purchase made 15 years ago and now called Customer Interaction Manager (CIM); the Aprimo cloud-based systems acquired for $525 million in 2010; and several more recent cloud-based acquisitions, notably eCircle email. The Aprimo group was dominant in the years immediately following the acquisition but control shifted back to the older Teradata team more recently. One bit of evidence: the Aprimo brand was dropped in 2013.

Since the original version of this post was written, I've been told by unofficial but reliable sources that Teradata management has said it intends to keep the on-premise CIM business and sell everything else.  This makes sense to some degree, since CIM is one of very few enterprise-scale on-premise marketing automation systems.  IBM and SAS are really the only other major competitors here, although Oracle and SAP are also contenders. I don’t know how much of Teradata’s revenue comes from CIM or how many new licenses it has sold recently.  Based on the information presented above, the business may be shrinking.  But there’s definitely a strong preference for on-premise marketing automation at many of the large enterprises who are Teradata's primary customers for its database and analytics products (which account for more than 90% of its revenue).  So keeping CIM may make sense just as a way to block competitors like IBM and SAS from using their own on-premise marketing automation systems to gain a foothold at Teradata accounts.  But it's really hard to imagine any new customers choosing CIM when Teradata has made clear it wants out of the marketing applications business.  Even current customers will have to wonder whether Teradata can be relied upon to keep CIM up to date.

So what happens now? Well, Marketing Applications is a $200 million business.  Even if CIM generates $50 million of that, which I doubt,  the remaining pieces make Teradata a major player in B2C marketing automation. (Point of reference: Salesforce.com reported $505 million revenue for its B2C marketing cloud in 2015.)   This suggests that someone will purchase the Teradata systems and continue to sell them. 

The question is who that buyer might be.  The big enterprise software companies already have their own systems, and CIM would probably be the only piece any of them might want (if they wanted to add a stronger on-premise product).  It’s conceivable that a private equity firm will purchase the systems and run them more or less independently or combine them with other products – look at HGGC’s recent combination of StrongView and Selligent (in the mid-market) or Zeta Interactive’s purchase of eBay’s CRM systems. If CIM were part of the package, I'd argue that Marketo should buy it and gain true enterprise scale B2C technology while nearly doubling its revenue.  But without CIM, that doesn't make much sense.

Iterable Offers Mid-Size B2C Marketers Powerful Campaigns in Outbound Channels

As William Shakespeare never wrote, some systems are born with data, some achieve data, and some have data thrust upon them. What the Bard would have meant is that some systems are designed around a marketing database, some add a database later in their development, and some attach to external data. The difference matters because marketers are increasingly required to pick a collection of components that somehow work together to deliver integrated customer experiences. This means that marketers must first determine whether they're looking for a system to provide their primary marketing database (since you only need one of those), and then figure out which products fall into the right category.

Whether you need a system with its own database ultimately depends on whether you have an adequate database in place. Obviously the key word in that sentence is "adequate".  How that's defined depends on the situation: key variables include the number and types of data you need available, how quickly new data must be processed, whether source data is already coded with a common customer ID, and how you want other systems to access the data.

As I wrote last week, there are a handful of Customer Data Platforms (CDPs) that do nothing but build a database. Many more systems build a database as part of a larger package that also includes an operational function such as predictive modeling or campaign management. This offers an immediate benefit but it complicates the system choice since you have to judge both the database and the operational features. It’s also trickier in a more subtle way because some systems build a great database but don’t make it fully available to other products. That’s spelled s-i-l-o.

These musings are prompted by my attempt to assess Iterable, a product I generally like but find as slippery as one of Shakespeare’s cross-dressing heroines. Iterable definitely builds its own database, using a JSON API and an Elasticsearch data store to manage pretty much any kind of data you might throw at it. This can happen in real time (yay!) or via batch file imports. The system even provides its own Javascript tag to post directly from Web pages and emails. It organizes the information into customer profiles that can include both static attributes and events such as transactions.  That’s pretty much what you want in your marketing database. Elasticsearch lets the system scale very nicely, returning queries on 100 million+ profiles in seconds. Yay again!
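As a rough illustration of that attribute-plus-events profile structure, here is a minimal Python sketch. The field names (`customerId`, `eventName`, and so on) are my own invention for illustration, not Iterable's actual schema:

```python
import json

def ingest_event(profiles, event):
    """Merge an incoming JSON event into a customer profile.

    Each profile holds static attributes plus an append-only event list,
    mirroring the attribute/event split described above.
    """
    customer_id = event["customerId"]  # incoming data must already carry an ID
    profile = profiles.setdefault(customer_id, {"attributes": {}, "events": []})
    profile["attributes"].update(event.get("attributes", {}))
    if "eventName" in event:
        profile["events"].append(
            {"name": event["eventName"], "fields": event.get("fields", {})}
        )
    return profile

profiles = {}
raw = '{"customerId": "c1", "attributes": {"email": "a@example.com"}, "eventName": "purchase", "fields": {"total": 42.5}}'
ingest_event(profiles, json.loads(raw))
```

Note the design consequence: because everything keys off the customer ID, any event arriving without one can't be attached to a profile – which is exactly the identity-association limitation discussed below.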

On the other hand, Iterable doesn’t let other systems query the data directly. Users can do analytics and build segments using Iterable’s own tools or export selected elements to other systems in a file.  They can also push data to other systems through integration with the Segment data hub.  So while Segment might be the core database supporting other marketing systems, Iterable will not.  Nor does Iterable do much in the way of identity association: new data must be coded with a customer ID to add it to a profile. This is a pretty common approach so it's not something to hold against Iterable in particular.  Just be aware that if you need to solve the association problem, you’ll have to look outside of Iterable for the answer.  Fortunately, there are plenty of other specialized systems to do this.

Perhaps Iterable provides so many operational functions that there's no need for other systems to access its data?  The answer depends on exactly what functions you need.  Iterable provides a flexible segmentation tool that can build static lists and can update dynamic lists in real time as new data is posted. This can be combined with exceptionally powerful multi-step workflows, including rarely-seen features such as converging paths (two nodes can point to the same destination) and parallel streams (the same customer can follow two paths out of the same node). It also supports more common, but still important, functions including filters, splits, a/b tests, waiting periods, API calls to external systems, and sending email, SMS, and push messages. One notably missing feature is predictive modeling to drive personalized messages, but Iterable recently set up an integration with BoomTrain to do this. Iterable still doesn’t offer Web site personalization although it might be able to support that indirectly through BoomTrain, Web hooks, or Segment.
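To make the converging-paths and parallel-streams distinction concrete, a workflow can be modeled as a directed graph, where both features are just edge patterns. This is a generic sketch with invented node names, not Iterable's internal representation:

```python
from collections import deque

# Workflow as a directed graph: node -> list of next nodes. Converging paths
# (two nodes pointing at the same destination) and parallel streams (one node
# with several outgoing edges, all of which the customer follows) are both
# simply patterns of edges in this structure.
workflow = {
    "start": ["send_email", "send_sms"],   # parallel streams out of one node
    "send_email": ["wait_2_days"],
    "send_sms": ["wait_2_days"],           # converges with the email path
    "wait_2_days": ["send_push"],
    "send_push": [],
}

def steps_visited(workflow, entry):
    """Return every step a customer reaches, visiting each node once
    even when multiple paths converge on it."""
    seen, queue = set(), deque([entry])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue  # already reached via another converging path
        seen.add(node)
        queue.extend(workflow[node])
    return seen
```

Many campaign builders restrict workflows to trees (one parent per node), which is why convergence and parallelism are rarely-seen features: they require graph traversal like the above rather than simple branch-following.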

Iterable includes content creation tools only for its messaging channels – again, that's email, SMS, and push.  This means users must rely on third party software to create forms and landing pages.  Nearly all B2B marketing automation systems do have form and page builders, but Iterable is targeted primarily at mid-tier B2C marketers, who are less likely to expect them.  Iterable’s B2C focus is further clarified by its prebuilt integration with Magento for ecommerce and with Mixpanel and Google Analytics for mobile and Web analytics. The system also provides a preference center to capture customer permissions to receive messages in different channels – a feature that is essential in B2C, although certainly helpful in B2B as well.

So where does this leave us? Iterable is more powerful than a basic email system but not quite as rich as full-blown marketing automation, let alone an integrated marketing suite or cloud. Page tags, JSON feeds, and Webhooks make it especially good at collecting information, although it will need help with identity association to make full use of this data.  It builds powerful outbound campaigns in email, SMS, and mobile apps.  Ultimately, this makes it a good choice for mid-size B2C marketers who want to orchestrate outbound messages  but are less concerned about Web pages or other inbound channels. Marketers could also use Iterable as the outbound component of a more comprehensive solution with Segment or something similar at the core.

Iterable was founded in 2013 and first released its product about a year ago. It currently has more than 30 clients paying an average around $3,000 per month. List prices start much lower and some clients are much larger.

Thursday, October 29, 2015

Openprise Gives Marketers Easy(ish) Tool to Manage Their Data

When I first described Customer Data Platforms two and a half years ago,  all the vendors offered an application such as predictive analytics or campaign management in addition to the "pure" CDP function of building the customer database.  Since then, some "pure" CDPs have emerged, notably among vendors with roots in Web page tag management – Tealium, Signal, and Ensighten (which just raised $53 million). Other data collection specialists include Segment.com, Aginity, Umbel, Lytics, NGData, and Woopra, although some of these do supplement database building with predictive model scores, segmentation, and/or event-based triggers.

Openprise falls roughly into this second category. It’s primarily used to set up data processing flows for data cleaning, matching, and lead routing. But it can also apply segment tags and send out alerts when specified conditions are met. What it doesn’t do is maintain a permanent customer database accessible to other systems for campaigns and execution. This means Openprise doesn’t meet the technical definition of a CDP. But Openprise could post data to such a database.  And since the essence of the CDP concept is letting marketers build the customer database for themselves, Openprise arguably provides the most important part of a CDP solution.

Current clients use Openprise in more modest ways, however.  Most are marketing and sales operations staff supporting Salesforce.com and Marketo who use Openprise to supplement the limited data management capabilities native to those systems. Openprise also integrates today with Google Apps and the Amazon Redshift database. Integrations with Oracle Eloqua, HubSpot, and Salesforce Pardot are planned by the end of this year. The Marketo integration reads only the lead object, although the activities object is being added.  The Salesforce integration reads leads, contacts, opportunities, campaigns, and accounts and will add custom objects.

Openprise works by connecting data sources, which are typically lists but sometimes API feeds, to “pipelines” that contain a sequence of if/then rules. Each rule checks whether a record meets a set of conditions (the “if”) and executes specified actions on those that qualify (the “then”). The interface lets users set up the flows, rules, and actions without writing programming code or scripts, usually by completing templates made up of forms with drop-down lists of possible answers. For example, a complex condition such as “sum exceeds threshold” would have a form with blanks where the user specifies the variable to sum, the variable to group by, the comparison operator, the threshold value, and the time period. This still takes some highly structured thinking – it’s far from writing an English language sentence – but is well within the capabilities of anyone likely to be in charge of operating a marketing automation or CRM system.
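Under the hood, a rule like “sum exceeds threshold” might work something like the following sketch. The field names and structure here are my assumptions for illustration, not Openprise's actual implementation (and the time-period filter is omitted):

```python
from collections import defaultdict

def sum_exceeds_threshold(records, value_field, group_field, threshold):
    """The 'if' half: sum value_field per group_field and return
    the set of groups whose total exceeds the threshold."""
    totals = defaultdict(float)
    for r in records:
        totals[r[group_field]] += r[value_field]
    return {g for g, total in totals.items() if total > threshold}

def run_rule(records, qualifying_groups, group_field, action):
    """The 'then' half: apply an action to each record in a qualifying group,
    leaving other records unchanged."""
    return [action(r) if r[group_field] in qualifying_groups else r
            for r in records]

leads = [
    {"account": "acme", "amount": 600.0},
    {"account": "acme", "amount": 500.0},
    {"account": "globex", "amount": 200.0},
]
hot = sum_exceeds_threshold(leads, "amount", "account", 1000.0)
tagged = run_rule(leads, hot, "account", lambda r: {**r, "tag": "high-value"})
```

The drop-down template the user fills in simply supplies the arguments (variable to sum, group-by field, operator, threshold) to a prebuilt condition like this one.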

Of course, the value of such a system depends on the actual actions it makes available. The two basic actions in Openprise are sending alerts and setting attribute values. Alerts can be based on complex rules and delivered via email or text message. Attribute values can be used to set segment tags, assign lead owners for routing, and cleanse data. Cleansing features include normalization to apply rules, standardize formats, and match against reference tables.  The system can also fill in missing values based on relationships such as inferring city and state from Zip code. Matching can apply fuzzy methods, use rules to handle near-matches, and set priorities when several possible matches are available. Parsing can scan a text block for keywords and extract them.
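Two of those cleansing features – filling missing values from Zip code and fuzzy matching against a reference table – can be sketched in a few lines. This is a generic illustration (with tiny, made-up reference tables), not Openprise's actual matching logic:

```python
import difflib

# Tiny reference tables for illustration; a real system would use full datasets.
ZIP_TO_CITY_STATE = {"10001": ("New York", "NY"), "94105": ("San Francisco", "CA")}
COMPANY_REFERENCE = ["International Business Machines", "Teradata", "Oracle"]

def cleanse(record):
    """Fill missing city/state from the Zip code and fuzzy-match the
    company name against a reference table, normalizing near-matches
    to the canonical name."""
    rec = dict(record)
    if rec.get("zip") in ZIP_TO_CITY_STATE and not rec.get("city"):
        rec["city"], rec["state"] = ZIP_TO_CITY_STATE[rec["zip"]]
    matches = difflib.get_close_matches(
        rec.get("company", ""), COMPANY_REFERENCE, n=1, cutoff=0.6
    )
    if matches:
        rec["company"] = matches[0]  # replace the typo with the canonical form
    return rec

# A record with a misspelled company name and no city:
cleaned = cleanse({"zip": "94105", "company": "Internatonal Business Machines"})
```

The cutoff parameter is where the "rules to handle near-matches" live: too low and unrelated names merge, too high and obvious typos slip through.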

Openprise already has special features to standardize job titles and roles and is working on company name clean up. It plans to add connectors for Dun & Bradstreet, ZoomInfo, and Data.com to verify and enhance customer information.

Updated records can be returned to the original source or sent to a different destination.  The Amazon Redshift connector means Openprise could feed a data warehouse or CDP available to other analytic and execution systems. Users can assign access rights to different data sets and to different elements within a set. They can then have the system send file extracts of the appropriate data to different recipients, a feature often used to share data with channel partners. Most pipelines execute as batch processes, either on demand or on a user-specified schedule. Some can run in real time through API calls.

The system also provides some data analysis capabilities, including time series, ranking, pie charts, word frequency, calendars, time of day, and trend reports. These are used mostly to help assess data quality and to profile new inputs.

Openprise says new customers usually get about two hours of training, during which they map a couple of data sources and build a sample pipeline.  The vendor also provides training videos and “cookbooks” that show how to set up common processes such as lead cleansing and merging two lists.

Pricing of Openprise is based on data volume processed, not number of records. Users can run 50 MB per month without charge. Running 100 MB per month costs $100 and running 1 GB per month costs $1,000. There is also a free trial.

Openprise was released in late September and had accrued more than 30 users by mid-October. It is available on Marketo LaunchPoint and will eventually be added to Salesforce AppExchange.

Friday, October 23, 2015

Why Time Is the Real Barrier to Marketing Technology Adoption and What To Do About It

I split my time this week between two conferences, Sailthru LIFT and Marketing Profs B2B Forum.  Both were well attended, well produced, and well worthwhile.  My personal highlights were:

- Sailthru introducing its next round of predictive modeling and personalization features and working to help users adopt them. As you probably don’t know, Sailthru automatically creates several scores on each customer record for things such as likelihood to purchase in the next week and likelihood to opt out of email. The company is making those available to guide list selection and content personalization for both email and Web pages.  One big focus at the conference was getting more clients to use them.

- Yours Truly presenting to the Sailthru attendees about building better data.  The thrust was that marketers know they need better data but still don’t give it priority. I tried to get them so excited with use cases – a.k.a. “business porn” – that they’d decide it was more important than other projects. If they wanted it badly enough, the theory went, they’d find the time and budget for the necessary technology and training. I probably shouldn’t admit this, but I was so determined to keep their attention that I resorted to a bar chart built entirely of kittens.  To download the deck, kittens and all, click here.

- Various experts at Marketing Profs talking (mostly over drinks) about the growth of Account Based Marketing. The consensus was that ABM is still in the early stages where people don’t agree on what’s included or how to evaluate results. Specific questions included whether ABM should deliver actual prospect names (at the risk of being measured solely on cost per lead); what measurements really do make sense (and whether marketers will pay for measurement separately from the ABM system); and how to extend ABM beyond display ad targeting. Or at least I think that’s what we discussed; the room was loud and drinks were free.

- Me (again) advising Marketing Profs attendees on avoiding common mistakes when selecting a marketing automation vendor.   My message here, repeated so many times it may have been annoying, was that users MUST MUST MUST define specific requirements and explore vendor features in detail to pick the right system. One epiphany was finding that nearly everyone in the room already had a marketing automation product in place – something that would not have been true two or three years ago.  These are knowledgeable buyers, which changes things completely.  (Click here for those slides which had no kittens but do include a nice unicorn.)

You may have noticed a common theme in these moments: trying to help marketers do things that are clearly in their interest but that they're somehow avoiding. Making fuller use of predictive models, building a complete customer view, focusing on target accounts, and using relevant system selection criteria are all things marketers know they should do. Yet nearly all industry discussion is focused on proving their value once again or – as the usual next step – on explaining how to do it.

What's the real obstacle?  Surveys often show that budget, strategy, or technology are the barriers. (See ChiefMartec Scott Brinker's recent post for more on this topic.)   But when you ask marketers face to face about the obstacles, the reason that comes up is consistently lack of time. (My theory on the difference is that people pressed for time don’t answer surveys.) And time, as I hinted above, is really a matter of priority: they are spending their time on other things that seem more important.

So the way to get marketers to do new things is to convince them they are worth the time.  That is, you must convince them the new things are more important than their current priorities.  Alternately, you can make the new thing so easy that it doesn’t need any time at all. The ABM vendors I discussed this with – all highly successful marketers – were doing both of these already, although they were polite enough not to roll their eyes and say “duh” when I brought it up.

How do you convince marketers (or any other buyers) that something they already know is important is more important than whatever they’re doing now? I’d argue this isn’t likely to be a rational choice: MAYBE you can find some fabulously compelling proof of value, but the marketers will probably have seen those arguments already and not been convinced. More likely, you'll need to rely on emotion.  This means getting marketers excited about doing something (that’s where the “business porn” comes in) or scared about the consequences of not doing it (see the CEB Challenger Sales model,  for example). In short, it’s about appealing to basic instincts – what Seth Godin calls the lizard brain – which will ultimately dictate to the rational mind.

What about the other path I mentioned around the time barrier, showing that the new idea takes so little time that it doesn’t require giving up any current priorities? That’s a more rational argument, since you have to convince the buyer that it’s true.  But everything new will take up at least some time and money, so there’s still some need to get the buyer excited enough to make the extra effort. This brings us back to the lizard.

I’m not saying all marketing should be emotional.  Powerful as they are, emotions can only tip the balance if the rational choice is close. And I’m talking about the specific situation of getting people to adopt something new, which is quite different from, say, selling an existing solution against a similar competitor. But I spend a lot of time talking with vendors who are selling new types of solutions and talking with marketers who would benefit from those solutions. Both the vendors and I often forget that time, not budget, skills or value, is the real barrier to adoption and that emotions are the key to unlocking more time. So emotions must be a big part of our marketing if we, and the marketers we're trying to serve, are ultimately going to succeed.

Teradata Adds a Data Management Platform To Its Marketing Cloud...Who Will Be Next?

Teradata on Tuesday announced it is adding a data management platform (DMP) to its marketing cloud through the acquisition of Netherlands-based FLXone.  This is interesting on several levels, including:

- It makes Teradata the third of the big marketing cloud vendors to add a DMP, joining Oracle DMP (BlueKai) and Adobe Audience Manager. I already expected the other cloud vendors to do this eventually; now I expect that will happen even sooner. I’m looking at you, Salesforce.com.

- Unlike Oracle and Adobe, Teradata has stated (in a briefing about the announcement) that it intends to use the DMP as the primary data store for all components of its suite. I see this as a huge difference from the other vendors, who maintain separate databases for each of their suite components and integrate them largely by swapping audience files with a few data elements on specified customers. (In fact, Adobe just last week briefed analysts on a new batch integration that pushes Campaign data into Audience Manager to build display advertising lookalike audiences. The process takes 24 hours.)

Of course, we’ll see what Teradata actually delivers in this regard.  It's also important to recognize that performance needs will almost surely require intermediate layers between the DMP's primary data store and the actual execution systems. This means the distinction between a single database and multiple databases isn’t as clear as I may be seeming to suggest. But I still think it’s an important difference in mindset.  In case it isn’t obvious, I think real integration does ultimately require running all systems on the same primary database.

- It is still more evidence of the merger between ad tech and martech. I know I wrote last week that this is old news, but there’s still plenty of work to be done to make it a reality. One consequence of "madtech" is that complete solutions are even larger than before, making them even harder for non-giant firms to produce. That’s the primary lesson I took away from last week’s news that StrongView had been merged into Selligent: although StrongView’s vision of omni-channel “contextual marketing” made tons of sense, they didn’t have the resources to make it happen. (See J-P De Clerck's excellent piece for in-depth analysis of the StrongView/Selligent deal.)  I’m not sure the combined Selligent/StrongView is big enough either, or that Selligent owner HGGC will make the other investments needed to fill all the gaps.

To be clear: I'm not saying small martech/adtech/madtech firms can't do well.  I think they can plug into a larger architecture that sits on top of a customer data platform and perhaps a shared decision platform. But I very much doubt that a mid-size software firm can build or buy a complete solution of its own.  If you're wondering just who I have in mind...well, Mom always told me that if I couldn’t say something nice, I shouldn’t say anything at all.  So I won’t name names.

Thursday, October 15, 2015

EverString Takes Another $65 Million and (More Important) Launches Predictive Ad Targeting Solution

EverString announced a $65 million funding round and new ad targeting product on Tuesday. (It also released a new survey on predictive marketing which is probably interesting, but I just can't face it after last weekend’s data binge.)

The new funding is certainly impressive, although the record for a B2B predictive marketing vendor is apparently InsideSales’ $100 million Series C in April 2014.  It confirms that EverString has become a leader in the field despite its relatively late entry.

But the new product is what’s really intriguing. Integration between marketing and advertising technologies has now gone from astute prediction to overused cliché, so nobody gets credit for creating another example. But the new EverString product isn’t the usual sharing of a prospect list with an ad platform, as in display retargeting, Facebook Custom Audiences, or LinkedIn Lead Accelerator. Rather, it finds prospects who are not yet on the marketer’s own list by scanning ad exchanges for promising individuals. More precisely, it puts a tag on the client's Web site to capture visitor behavior, combines this with the client's CRM data and EverString's own data, and then builds a predictive model to find prospects who are similar to the most engaged current customers.  This is a form of lookalike modeling -- something that was separately mentioned to me twice this week (both times by big marketing cloud vendors), earning it the coveted Use Case of the Week Award.

Once the prospects are ranked, EverString lets users define the number of new prospects they want and set up real time bidding campaigns with the usual bells and whistles including total and daily budgets and frequency caps per individual.  EverString doesn’t identify the prospects by name, but it does figure out their employer and track their behaviors over time. If this all rings a bell, you’re on the right track: yes, EverString has created its very own combined Data Management Platform / Demand Side Platform and is using it to build and target audience profiles.
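The lookalike ranking at the heart of this can be sketched minimally – here as cosine similarity to the centroid of the most engaged customers, which is one simple way to do lookalike modeling, not necessarily EverString's actual method. Feature values might represent page views, content categories, firmographics, and so on:

```python
import math

def centroid(vectors):
    """Average feature vector of the engaged customers."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def rank_lookalikes(engaged, prospects, top_n):
    """Rank anonymous prospects by similarity to the 'ideal engaged
    customer' profile and return the top_n closest ones."""
    target = centroid(engaged)
    scored = sorted(prospects.items(),
                    key=lambda kv: cosine(kv[1], target), reverse=True)
    return [name for name, _ in scored[:top_n]]

# Made-up feature vectors: two engaged customers, two candidate prospects.
engaged = [[5.0, 1.0, 0.0], [4.0, 2.0, 0.0]]
prospects = {"p1": [4.5, 1.5, 0.1], "p2": [0.1, 0.2, 5.0]}
best = rank_lookalikes(engaged, prospects, top_n=1)
```

This also shows why the "define the number of new prospects you want" control is natural: a similarity ranking has no inherent cutoff, so the marketer supplies one.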

In some ways, this isn’t such a huge leap: EverString and several other predictive marketing vendors have long assembled large databases of company and/or individual profiles. These were typically sourced from public information such as Web sites, job postings, and social media. Some vendors also added intent data based on visits to a network of publisher Web sites, but those networks capture a small share of total Web activity. Building a true DMP/DSP with access to the full range of ad exchange traffic is a major step beyond previous efforts. It puts EverString in competition with new sets of players, including the big marketing clouds, several of which have their own DMPs; the big data compilers; and ad targeting giants such as LinkedIn, Google, and Facebook. Of course, the most direct competitors would be account based marketing vendors including Demandbase, Terminus, Azalead, Engagio, and Vendemore. While we’re at it, we could throw into the mix other DMP/DSPs such as RocketFuel, Turn, and IgnitionOne.

At this point, your inner business strategist may be wondering if EverString has bitten off more than it can chew or committed the cardinal sin of losing focus. That may turn out to be the case, but the company does have an internal logic guiding its decisions. Specifically, it sees itself as leveraging its core competency in B2B prospect modeling, by using the same models for multiple tasks including lead scoring, new prospect identification, and, now, ad targeting. Moreover, it sees these applications reinforcing each other by sharing the data they create: for example, the ad targeting becomes more effective when it can use information that lead scoring has gathered about who ultimately becomes a customer.

From a more mundane perspective, limiting its focus to B2B prospect management lets EverString concentrate its own marketing and sales efforts on a specific set of buyers, even as it slowly expands the range of problems it can help those buyers to solve. So there is considerably more going on here than a hammer looking for something new to nail.

Speaking of unrelated topics*, the EverString funding follows quickly on the heels of another large investment – $58 million – in automated testing and personalization vendor Optimizely, which itself followed Oracle’s acquisition of Optimizely competitor Maxymiser. I’ve never thought of predictive modeling and testing as having much to do with each other, although both do use advanced analytics. But now that they’re both in the news at the same time, I’m wondering if there might be some deeper connection. After all, both are concerned with predicting behavior and, ultimately, with choosing the right treatment for each individual. This suggests that cross-pollination could result in a useful hybrid – perhaps testing techniques could help evolve campaign structures that use predictive modeling to select messages at each step. It’s a half-baked notion but does address automated campaign design, which I see as the next grand challenge for the combined martech/adtech (=madtech) industry. On a less exalted level, I suspect that automated testing and predictive modeling can be combined to give better results in their current applications than either by itself. So I’ll be keeping an eye out for that type of integration. Let me know if you spot any.
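To keep the half-baked notion honest, here's one minimal way testing and predictive modeling already combine in practice: an epsilon-greedy selector that usually sends the message the model scores highest but explores alternatives a fraction of the time, so the test data keeps improving the model. This is a generic bandit sketch with invented offer names, not any vendor's product:

```python
import random

def choose_message(model_scores, epsilon, rng):
    """Epsilon-greedy blend of testing and prediction: with probability
    epsilon, pick a random message (explore); otherwise pick the message
    with the highest predicted response rate (exploit)."""
    if rng.random() < epsilon:
        return rng.choice(list(model_scores))
    return max(model_scores, key=model_scores.get)

rng = random.Random(42)  # seeded for reproducibility
scores = {"offer_a": 0.12, "offer_b": 0.31, "offer_c": 0.07}
picks = [choose_message(scores, epsilon=0.1, rng=rng) for _ in range(1000)]
```

Over 1,000 sends, roughly 90% go to the best-scored offer while the remainder gather fresh evidence on the others – the same tension any testing-plus-modeling hybrid has to manage.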

*lamest transition ever

Tuesday, October 06, 2015

Marketers Are Struggling to Keep Up With Customer Expectations: Here's Proof

How pitiful is this: My wife left me alone all last weekend and the most mischief I could get into was looking for research about cross-channel customer views. The only defense I can make is I did promise a client a paper on the topic, which I finished Sunday night. But then I decided it was way too wonky and wrote a new, data-free version that people might actually read.

But you, Dear Reader, get the benefit of my crazy little binge. Here’s a fact-filled blog post that uses some of my carefully assembled information. (Yes, there was actually much more. I’m so ashamed.)

Customer Expectations are Rising

Let's start with a truth universally acknowledged – that customers have rising expectations for personalized treatment. Unlike Jane Austen, I have facts for my assertion: e-tailing group's 7th Annual Consumer Personalization Survey found that 52% of consumers believe most online retailers can recognize them as the same person across devices and personalize their shopping experience accordingly. An even higher proportion (60%) want their past behaviors used to expedite the shopping experience, and more than one-third (37%) are frustrated when companies don’t take that data into account.

Switching to customer service, Microsoft’s 2015 Global State of Multichannel Customer Service Report  found that 68% of U.S. consumers had stopped doing business with a brand due to a poor customer service experience and 56% have higher expectations for customer service than a year ago. So, yes, customer expectations are rising and failing to meet them has a price.

Marketers Know They Need Data

Marketers certainly see this as well. In a Harris Poll conducted for Lithium Technologies, 82% of 300 executives agreed that customer expectations have gotten higher in the past three years.  Focusing more specifically on data, Experian's 2015 Data Quality Benchmark Report, which had 1,200 respondents, found that 99% agreed some type of customer data is essential for marketing success. Marketers are backing those opinions with money: when Winterberry Group asked a select set of senior marketers what was driving their investments in data-driven marketing and advertising, the most commonly cited reason was the need to deliver more relevant communications and be more customer-centric.

Few Have the Data They Need

But marketers also recognize that they have a long way to go. In Experian’s 2015 Digital Marketer study, 89% of marketers reported at least one challenge with creating a complete customer view.

Econsultancy’s 2015 The Multichannel Reality study for Adobe found that just 29% had succeeded in creating such a view, 15% could access the complete view in their campaign manager, 14% could integrate all campaigns across all channels, and 8% were able to adapt the customer experience based on context in real time.  In other words, the complete view is just the beginning – and marketers are nowhere near as good at personalizing experiences as consumers think.

Real-Time Isn't a Luxury

Given the challenges in building any complete view, is real-time experience coordination too much to ask? Customers don’t think so; in fact, as we've already seen, they assume it’s already happening. Marketers, of course, are more aware of the challenges, but they too see it as the goal. In a survey of their own clients, marketing data analysis and campaign software vendor Apteco Ltd found that 12% of respondents were already using real-time data, 31% were sure they needed it and 37% felt it might be useful. Just 17% felt daily updates were adequate.

Real-Time Must Also Be Cross-Channel

It’s important not to confuse real-time personalization with tracking customers across channels or even identifying customers at all.  In a survey by personalization vendor Evergage, respondents who were already doing real-time personalization were most often basing it on immediately observable, potentially anonymous data including type of content viewed, location, time on site, and navigation behavior.  Yet the marketers in that same study gave the highest importance ratings to identity-based information including customer value, buying/shopping patterns, and buyer persona. It’s clear that marketers recognize the need for a complete customer view even if they haven't built one.


What are we to make of all this, other than the fact that I need to get out more?  I'd summarize this in three points:

- customer expectations are truly rising and you'll be penalized if you don't meet them
- marketers know that meeting expectations requires a complete customer view but few have built one
- the complete view has to be part of an integrated, real-time system to deliver the necessary results

None of this should be news to anyone. But perhaps this data will help build your business case for investments to solve the problem.  If so, my lost weekend will not have been in vain.

Tuesday, September 29, 2015

How Many Ads Do You See Each Day? Fewer Than It Seems (I Think)

My cliché detector starts chirping as soon as anyone says today’s marketers face more competition than ever before. Sepia-toned glasses notwithstanding, marketers have had competitors at least since the railroads (or maybe canals) made it practical for customers to shop outside the local village – and for even longer if you lived in a city. So competition has been tough for as long as anyone now living can remember.

I thought I’d found a legitimate exception when writing a recent paper for QuickPivot (available here) about the continued value of direct mail marketing. My core argument was that direct mail is needed to cut through the clutter created by the increase in advertising messages. Surely it’s self-evident that people get more advertising today than ever before, right?

Well, I certainly thought so when I wrote the paper. But I recently searched for some validation of the claim about more clutter.  Turns out that shockingly little research has been published on the question of how many ads people actually receive. The most authoritative-looking study I found was from Media Dynamics in 2014, which found almost no change since 1945 in ads people were exposed to, despite a near-doubling in the time spent with media (defined as TV, radio, Internet, newspapers and magazines).

Really? Just 362 ads per day? That certainly seemed low to me, even after recognizing that the study reports on paid advertising, as opposed to the brand logos you see on everything from football stadiums to a biker’s tattoo. The 362 is nowhere near the widely cited figure of 5,000 per day, although the origins of that are mysterious at best.

I looked further but couldn’t find any better data. Now I was really frustrated and pondered what it would take to make a crude estimate of my own as a simple sanity check. This ultimately led to the bright idea of using total ad spending and cost per thousand impressions to calculate total ad impressions per year. Once you have that number, it can easily be converted to ad impressions per person per day. What I like about this method is that it avoids any need to estimate how many ads an individual sees per minute of media time or how many ads are theoretically available to be viewed. On the other hand, it assumes that advertisers get exactly the impressions they pay for, neither more nor less.  This is certainly not true in an absolute sense, but I'm willing to trust that the correlation between actual and purchased impressions is close enough to give an answer that's approximately correct.
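If you want to play with the method yourself, the arithmetic is simple enough to sketch in a few lines. Note that the per-channel spending and CPM figures below are illustrative placeholders, not the actual numbers behind my table:

```python
# Back-of-the-envelope estimate of ad impressions per person per day:
# impressions = spend / CPM * 1000 for each channel, summed, then
# divided by population and days per year.
# All spending and CPM figures here are illustrative assumptions.

US_POPULATION = 320_000_000  # rough 2015 U.S. population
DAYS_PER_YEAR = 365

# channel: (annual spend in dollars, average CPM in dollars) -- placeholders
channels = {
    "tv":      (70e9, 20.0),
    "digital": (55e9,  3.0),
    "radio":   (15e9, 10.0),
    "print":   (25e9, 15.0),
}

total_impressions = sum(spend / cpm * 1000 for spend, cpm in channels.values())
per_person_per_day = total_impressions / US_POPULATION / DAYS_PER_YEAR
print(f"{per_person_per_day:.0f} impressions per person per day")
```

Swap in your own spend and CPM assumptions per channel and the estimate moves accordingly; the structure of the calculation is what matters.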

The table below shows my results. Figures for advertising are easy: eMarketer publishes them all the time.  CPMs are harder to find.  I ended up using figures from sources including INFOACRS (which itself quotes eMarketer, although from 2008) and MonetizePros. There’s a bit of body English in there as well.  I do think the results are either reasonable or low – another widely quoted authority, Augustine Fou, shows generally higher CPM figures than the ones I used, which would result in estimating even fewer impressions from the same spending.

Bottom line, my numbers show 264 impressions per person per day.  That's a little lower than Media Dynamics, but in the same ballpark. Interesting.

I will admit that I’m inordinately pleased with this methodology, which I suspect is roughly correct despite many flaws in the details. One virtue is that it sheds some light on the original question of whether people are seeing more ads: since we know that total media spending is rising slightly and the mix is shifting towards the low-CPM digital channel, the number of impressions is almost certainly on the rise. You'd have to adjust any trend calculations for changes in population and in channel CPMs to know for sure.

On the other hand, the shifts are fairly gradual so it's probably wrong to claim that we're facing a sudden flood of increased advertising.  I don't have the necessary data readily available to do more detailed calculations, but if anyone out there does have the data and the time to do the math, I’d love to see your results.

(And if you're wondering: there will be about 80 billion pieces of advertising mail delivered this year, which comes to about 0.7 per person per day.)
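That 0.7 figure comes from the same simple arithmetic, assuming a rough 2015 U.S. population of about 320 million:

```python
# Sanity check: 80 billion mail pieces spread over the U.S. population
# and the days in a year. Population is a rough 2015 estimate.
US_POPULATION = 320_000_000
mail_pieces_per_year = 80e9

per_person_per_day = mail_pieces_per_year / US_POPULATION / 365
print(round(per_person_per_day, 1))  # -> 0.7
```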

Thursday, September 10, 2015

Data Plus MarTech: HubSpot and Demandbase Join the Race

There were two industry announcements this week that were unexpectedly related. The first was HubSpot’s announcement yesterday that its CRM offerings would now include access to a 19 million account prospecting database. The second was Demandbase’s acquisition of data-as-a-service vendor WhoToo, which offers its own set of 250 million profiles relating to 70 million business professionals.

The WhoToo acquisition marks a big step in the continued evolution of Demandbase, since it's a change from targeting companies to targeting individuals (although Demandbase still won’t sell you their names). More precisely, WhoToo aggregates audience data from multiple sources and makes it available for selections based on company and individual attributes. The company does know the identity of some individuals and will use these to target email and Web advertising to names you provide. It will also let you market to audiences in those channels without providing their names. This is a nice extension of Demandbase’s existing account-based marketing capabilities. What makes WhoToo really special is that it has the technology to access its data with the split-second speed needed to purchase display and mobile ads in real time.

The addition of individual-level targeting puts Demandbase on a more even plane with LinkedIn, which of course already sells advertising to its own huge database of more than 350 million profiles. The WhoToo deal won’t fully close that gap, but it does help to keep Demandbase competitive. (I’m sure Demandbase would argue it has its own advantages over LinkedIn.)

In this context, the HubSpot announcement is interesting mostly because it too recognizes the importance of giving marketers audience lists without acquiring the names for themselves. You could argue this makes HubSpot a player in the super-hot Account Based Marketing category, although they didn't use the term.  If they are, it's ABM-lite, in the sense that HubSpot will give CRM users basic profile information, usually including a phone number, but doesn't offer contact names or email addresses. It also pulls recent news stories.  This is pretty consistent with HubSpot's historic aversion to unsolicited outbound contacts.  The company does approach the line by giving enterprise users an option to find other people in their company who have a contact at target accounts and ask for a warm introduction.

On the other hand, HubSpot also announced integration with LinkedIn for paid ad campaigns and said a Google AdWords integration is in beta, which are definitely in outbound territory. Naturally, HubSpot says its LinkedIn and Google campaigns will be giving potential buyers information they want, so they are not at all like that bad old interruptive advertising that HubSpot has always opposed. No, not one bit.

Anyway, the point here is that both HubSpot and Demandbase are adding data to their marketing technology, something we’ve seen in other deals like Oracle buying Datalogix. There are still plenty of stand-alone data vendors, especially when it comes to B2B prospecting lists. And there are plenty of vendors who combine prospect data with predictive – including LinkedIn itself since its recent Fliptop acquisition. But I think we can add “data plus tech” to the tote board of martech horse races.

Thursday, August 27, 2015

LinkedIn Buys Fliptop: Why Account Based Marketing and Predictive Analytics Are a Natural Fit

Predictive analytics vendor Fliptop today announced its acquisition by B2B social network LinkedIn.  It's an interesting piece of news but I'm personally disappointed at the timing because I have been planning all week to write a post about the relationship between predictive analytics and account-based marketing (ABM).  I would have looked so much more prescient had they announced the acquisition after I had published this post!

The original inspiration for the planned post was a set of three back-to-back conversations I had last Friday with one ABM vendor and two predictive analytics companies (none of which were Fliptop or LinkedIn).  The juxtaposition highlighted just how much predictive and ABM complement each other.  In fact, the relationship is so obvious that it almost seems unnecessary to lay it out: predictive vendors help marketers find accounts to target; ABM helps marketers reach target accounts.  You can safely assume that both sets of vendors have noticed the relationship and that many are working to combine the two techniques.  The Fliptop/LinkedIn deal is just more evidence of the connection.

To move past the very obvious, ABM vendors – whose basic business is selling ads targeted to specific companies – could also use predictive analytics to refine their ad targeting.  This could mean selecting the best people to reach within targeted accounts or selecting the most effective ad placements to reach those accounts.  This requires integration of predictive analytics within the ABM product, not just using predictive before ABM begins.  I expect LinkedIn will use Fliptop's capabilities in these ways among others.

But, getting back to last week's conversations, what really struck me was a less obvious connection of ABM and predictive to content.  Two of the vendors described using their systems to select which content to send to specific accounts or individuals.  These selections are based on previous behavior, something that certainly makes sense.  But I don't generally recall hearing ABM or predictive vendors discuss content selection as one of their applications.  It's an important idea because it promises to improve results by delivering more relevant content for the same price.  The same data gives marketers insights into broader trends in the types of content that buyers find interesting.

Content analysis requires the ABM or predictive system to be aware of the topics of the content being consumed.  This is only possible if someone specifically goes to the trouble of tagging the content and capturing the tags.  So content analysis is not quite a natural byproduct of the ABM or predictive analytics: it takes some intentional effort.  A corollary is that not all ABM and predictive systems can deliver this benefit.  So it's something to specifically ask prospective vendors about if you think you'll want it.

To put things in a still broader perspective, targeting content with ABM and predictive systems is part of a broader trend of using advanced technology to help marketers create, manage, and optimize content.  This is something that vendors like Captora, Persado, and Olapic do in terms of content creation, and Jivox, OneSpot, Triblio, and BloomReach do in terms of personalized content delivery.  I've been looking at a lot of those systems recently although I haven't written much about them here.  New targeting technologies create unprecedented demands for more content, which only new content technologies can meet.  So you can expect to hear more about technology-based content creation, whether I write about it or not.

Friday, August 21, 2015

Landscape of MarTech Vendor Directories

I'm making a presentation on marketing technology selection at B2BLeadsCon in New York next week, and had thought to start with the usual Oh-My-God-There-Are-So-Many-Vendors slide to get everybody's attention.  This would ordinarily be Scott Brinker's popular Chief MarTech Landscape but I've recently seen so many variations on the theme that I put together a composite slide instead.  This includes Scott's slide plus versions from Luma Partners, Gartner, MarTech Advisor, Terminus/FlipMyFunnel, and Growthverse.

I considered labeling this a "landscape of landscapes" but quickly realized that (a) it's not all that witty and (b) six vendors isn't enough.  But on further reflection, I recognized that these landscapes are really a type of directory that helps marketers find available products.  This led me to consider other types of online directories, of which there are many.  So I did end up producing a landscape that still isn't as crowded as Scott's but does show the number of information sources available.

As you see, this contains four sets of products: the original six landscapes, divided between the static images and the two interactive options (both very cool).  In addition, there are two directories with analyst ratings, from Gleanster and TopAlternatives.  But the biggest category is the community review sites, of which the best known among marketers are probably G2 Crowd, TrustRadius, and Software Advice.  Because the purpose here is to list tools that help marketers find systems to purchase, I didn't extend the landscape to business directories like Crunchbase, VentureBeat's VB Profiles and Owler.

I did look at every vendor shown in the graphic and can affirm that each includes at least some marketing systems.  There are some interesting differences in approach but, like any good landscape creator, I'll simply give you a set of logos and let you research from there. Again following the tradition of landscape publishers, I make no claims about the completeness of my list or the quality of any of the companies listed.  But I will make your life a bit easier by listing all the links below.  Enjoy!

Chief MarTech
G2 Crowd
Luma Partners 
MarTech Advisor
IT Central Station
Social Compare
Software Advice
Software Insider (formerly FindTheBest)