About eighteen months ago I started presenting a scenario of a woman named Jane riding in a self-driving car, unaware that her smart devices were debating whether to stop for gas and let her buy a donut. The point of the scenario was that future marketing would be focused on convincing consumers to trust the marketer’s system to make day-to-day purchasing decisions. This is a huge change from marketing today, which aims mainly to sell individual products. In the future, those product decisions will be handled by algorithms that consumers cannot understand in detail. So consumers’ only real choices will be which systems to trust. We can expect the world to divide itself into tribes of consumers who rely on companies like Amazon, Apple, Google, or Facebook and who ultimately end up making similar purchases to everyone else in their tribe.
The presentation has been quite popular – especially the part about the donut. So far the world is tracking my predictions quite closely. To take one example, the script says that wireless connections to automobiles were banned after "the Minneapolis Incident of 2018". Details aren’t specified but presumably the Incident was a cyberattack that took over cars remotely. Subsequent reports of remote Jeep hacking fit the scenario almost exactly, and the recent take-down of the Dyn DNS service by a botnet of nanny cams and smart printers was an even more prominent illustration of the danger. The resulting, and long overdue, concern about security on Internet of Things devices is just what I predicted from the Minneapolis Incident.
Fond as I am of that scenario, enough has happened to justify a new one. Two particular milestones were last summer’s mass adoption of augmented reality in the form of Pokémon Go and this autumn’s sudden awareness of reality bubbles created by social media and fake news.
The new scenario describes another woman, Sue, walking down Michigan Avenue in Chicago. She’s wearing augmented reality equipment – let’s say from RoseColoredGlasses.Me, a real Web site* – that shows her preferred reality: one with trash removed from the street and weather changed from cloudy to sunshine. She’s also receiving her preferred stream of news (the stock market is up and the Cubs won their third straight World Series). Now she gets a message that her husband just sent flowers to her office. She checks her hair in the virtual mirror – she looks marvelous, as always – and walks into a store to find her favorite brand of shoes are on sale. Et cetera.
There’s a lot going on here. We have visual alterations (invisible trash and shining sun), facts that may or may not be true (stock market and baseball scores), events with uncertain causes (did her husband send those flowers or did his computer agent?), possible self-delusion (her hair might not look so great), and commercial machinations (is that really a sale price for those shoes?). It's complicated but the net result is that Sue lives in a much nicer world than the real one. Many people would gladly pay for a similar experience. It’s the voluntary nature of this purchase that makes RoseColoredGlasses.Me nearly inevitable: there will definitely be a market. Let’s call it “personal reality”.
We have to work out some safeguards so Sue doesn’t trip over a pile of invisible trash or get run over by a truck she has chosen not to see. Those are easy to imagine. Maybe she gets BubbleBurst™ reality alerts that issue warnings when necessary. Or, less jarringly, the system might substitute things like flower beds for trash piles. Maybe the street traffic is replaced by herds of brightly colored unicorns.
If we really want things to get interesting, we can have Sue meet a friend. Is her friend experiencing the same weather, same baseball season, same unicorns? If she isn’t, how can they effectively communicate? Maybe they can switch views, perhaps as easily as trading glasses: literally seeing the world through someone else’s eyes. That could be quite a shock. Maybe Sue’s friend is the fearful type and has set her glasses to show every possible threat; not only are the trash piles highlighted but strangers look frightening and every product has a consumer warning label attached. A less disruptive approach could be some external signifier to show her friend’s current state: perhaps her glasses are tinted gray, not rose colored, or Sue sees a worried-face emoticon on her forehead.
The communication problems are challenging but solvable. Still, we can expect people with similar views to gravitate towards each other. They would simply find it easier and more pleasant to interact with people sharing their views. Of course, this type of sorting happens already. That’s what makes the RoseColoredGlasses.Me scenario so intriguing: it describes highly-feasible technical developments that are entirely compatible with larger social trends and, perhaps, human nature itself. Many forces push in this direction and there’s really nothing to stop it. I have seen the futures and they work.
Maybe you’re not quite ready to give up on the notion of objective reality. If I can screen out global warming, homeless people, immigrants, Republicans, Democrats, or anything else I dislike, then what’s to motivate me to fix the actual underlying problems? Conversely, if people’s true preferences are known, do they justify real-world action: say, removing actual homeless people from the streets if no one wants to see them? That sounds ugly but maybe a market mechanism could turn it to advantage: if enough people pay RoseColoredGlasses.Me to remove the homeless people from their virtual world, then some of that money could fund programs to help the actual homeless people. Maybe that’s still immoral when people are involved but what if we’re talking about better street signs? Replacing virtual street signs for RoseColoredGlasses.Me subscribers with actual street signs visible to everyone sounds like a winner. It would even mean less work for the computers and thus save money for RoseColoredGlasses.Me.
Another wrinkle: if the owners of RoseColoredGlasses.Me are really smart (and they will be), won't they manipulate customers’ virtual reality in ways that lead the city to put up better street signs with its own money? Maybe there will be a virtual mass movement on the topic, complete with artificial-but-realistic social media posts, videos of street demonstrations, and heart-rending reports of tragic accidents that could have been avoided with better signage. Customers would have no way to know which parts were real. Then again, they can’t tell today, either.
The border between virtual and actual reality is where the really knotty problems appear. One is the fate of people who can’t afford to pay for a private reality: as we already noted, they get stuck in a world where problems don’t get solved because richer people literally don’t see them. Again, this isn’t so different from today’s world, so it may not raise any new questions (although it does make the old questions more urgent). Today’s world also hints at the likely resolution: people living in different realities will be physically segregated. Wealthier people will pay to have nicer environments and will exclude others who can’t afford the same level of service. They will avoid public spaces where different groups mix and will pay for physical and virtual buffers to manage any mixing that does occur.
Another problem is the cost of altering reality for paying customers. It’s probably cheap to insert better street signs. But masking the impact of global warming could get expensive. On a technical level, bigger changes require more processing power for the computer and better cocoons for the customers. To fix global warming they’d need something that changes the apparent temperature, precipitation, and eventually the shoreline and sea level. It’s possible to imagine RoseColoredGlasses.Me customers wearing portable shells that create artificial environments as they move about. But it’s more efficient for the computer if people stay inside and it simulates the entire experience. Like most of the other things I’ve suggested here, this sounds stupid and crazy but, as anyone who has used a video conference room already knows, it’s also not so far from today’s reality. If you think I’m blurring the border between augmented and virtual reality, it’s not because I’m unaware of the distinction. It’s because the distinction is increasingly blurry.
I do think, though, that the increasing cost of having the computer generate greater deviations from physical reality will have an important impact on how things turn out. So let's pivot from discussing ever-greater personalization (the ultimate endpoint of which is personal reality) to discussing the role of computers in it all.
To start once more with the obvious, personal reality takes a lot of computer power. Beyond whatever hyperrealistic rendering is needed, the system needs vast artificial intelligence to present the reality each customer has specified. After all, the customer will only define a relatively small number of preferences, such as “there is no such thing as global warming”. It’s then up to the computer to create a plausible environment that matches that preference (to the degree possible, of course; some preferences may simply be illogical or self-contradictory). The computer also probably has to modify news feeds, historical data, research results, and other aspects of experience to match the customer’s choice.
The computer must deliver these changes as efficiently as possible – after all, RoseColoredGlasses.Me wants to make a profit. This means the computer may make choices that minimize its cost even when those choices are not in the interest of the customer. For example, if going outdoors requires hugely expensive processing to hide the actual weather, the computer might start generating realities that lead the customer to stay inside. This could be as innocent as suggesting they order in rather than visit a restaurant (especially if delivery services allow the customer to eat the same food either way). Or it could deter travel with fake news reports about bad weather, transit breakdowns, or riots. As various kinds of telepresence technology improve, keeping customers indoors will become more possible and, from the customer’s standpoint, actually a better option.
This all happens without any malevolence by the computer or its operator. It certainly doesn't matter whether the computer is self-aware. The computer is simply optimizing results for all concerned. In practice, each personal reality involves vastly more choices than anyone can monitor, so the computer will be left to its own devices. No one will understand what the computer is doing or why. Theoretically customers could reject the service if they find the computer is making sub-optimal choices. But if the computer is controlling their entire reality, customers will have no way to know that something better is possible. Friends or news reports that tried to warn them would literally never be heard – their words would be altered to something positive. If they persisted, they would probably be blocked out entirely.
I know this all sounds horribly dystopian. It is. My problem is there’s no clear boundary between the attractive but safe applications – many of which exist today – and the more dangerous ones that could easily follow. Many people would argue that systems like Facebook have already created a primitive personal reality that is harmful to the individuals involved (and to the larger social good, if they believe that such a thing exists). So we’ve already started down the slippery slope and there’s no obvious fence to stop our fall.
Or maybe there is. It’s possible that multiple realities will prove untenable. Maybe the computers themselves will decide it’s more efficient to maintain a single reality and force everyone to accept it (but I suspect customers would rebel). Maybe social cohesion will be so damaged that a society with multiple realities cannot function (although so far that hasn’t happened). Maybe governments will decide to require a degree of shared reality and limit the amount of permitted diversity (this already happens in authoritarian regimes but not yet in Western democracies). Or maybe societies with a unified reality will be more effective and ultimately outcompete more fractured societies (possible and perhaps likely, but not right away). In short, the future is far from clear.
And what does all this mean for marketing? Maybe that’s a silly question when reality itself is at stake. But assuming that society doesn’t fall apart entirely, you’ll still need to make a living. Some less extreme version of what I’ve described will almost surely come to pass. Let's say it boils down to increasingly diverse personal realities as computers control larger portions of everyone’s experience. What would that imply?
One implication is the number of entities with direct access to any particular individual will decrease. Instead of dealing with Apple, Facebook, Google, and Amazon for different purposes, individuals will get a more coherent experience by selecting one gatekeeper for just about everything. This will give gatekeepers more complete information for each customer, which will let the gatekeepers drive better-tailored experiences. Marketing at gatekeepers will therefore focus on gathering as much information as possible, using it to understand customer preferences, and delivering experiences that match those preferences. Competition will be based on insights, scope of services, and efficient execution. The winners will be companies who can guide consumers to enjoy experiences that are cost-effective to deliver.
Gatekeeper marketers will still have to build trusted brands, but this will become less important. Different gatekeeping companies will probably align with different social groups or attitudes, so most people will have a natural fit with one gatekeeper or another. This social positioning will be even more important as gatekeepers provide an ever-broader range of services, making it harder to find specific points of differentiation. Diminished competition, the ability to block messages from other gatekeepers, and the high cost of switching will mean customers tend to stick with their initial choice. People who do make a switch can expect great inconvenience as the new gatekeeper assembles information to provide tailored services. Switchers might even lose touch with old friends as they vanish from communication channels controlled by their former gatekeeper. In the RoseColoredGlasses.Me scenario, they could become literally invisible as they’re blocked from sight in friends' augmented realities.
Marketers who work outside the gatekeepers will face different challenges. Brand reputation and trust will again be less important since gatekeepers make most choices for consumers. In an ideal world the gatekeepers would constantly scan the market to find the best products for each customer. This would open every market to new suppliers, putting a premium on superior value and meeting customer needs. But in the real world, gatekeepers could easily get lazy. They'd offer less selection and favor suppliers who give the best deal to the gatekeeper itself. The risk is low, since customers will rarely be aware of alternatives the gatekeeper doesn’t present. New brands will pay a premium to hire the rare guerilla marketers who can circumvent the gatekeepers to reach new customers directly.
Jane in her self-driving car and Sue walking down Michigan Avenue are both headed in the same direction: they are delegating decisions to machines. But Jane is at an earlier stage in the journey, where she’s still working with different machines simultaneously – and therefore has to decide repeatedly which machines to trust. Paradoxically, Sue makes fewer choices even though she has more control over her ultimate experience. Marketers play important roles in both worlds but their tasks are slightly different. The best you can do is keep an eye out for signs that show where your business is now and where it’s headed. Then adjust your actions so you arrive safely at your final destination.
_____________________________________________________________________
*The site's a joke. I no longer own the domain, though.
Wednesday, December 14, 2016
BlueVenn Bundles Omnichannel Journey Management, Personalization, and Single Customer View
BlueVenn has been active in the U.S. market only since March 2016, although many U.S. marketers will recall its previous incarnation as SmartFocus.* The company offers what it calls an omnichannel marketing platform that builds a unified customer database, manages marketing campaigns, and generates personalized Web and email messages.
The unified database process, a.k.a. single customer view, has rich functionality to load data from multiple sources and do standardization, validation, enhancement, hygiene, matching, deduplication, governance and auditing. These were standard functions for traditional marketing databases, which needed them to match direct mail names and addresses, but are not always found in modern customer data platforms. BlueVenn also supports current identity linking techniques such as storing associations among cookies, email addresses, form submits, and devices. This sort of identity resolution is a batch process that runs overnight. The system can also look up information about a specific customer in real time if an ID is provided. This lets BlueVenn support real time interactions in Web and call center channels.
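To make the identity resolution piece more concrete, here is a toy Python sketch of the general approach – linking identifiers that are observed together in a nightly batch, then resolving any single identifier in real time. To be clear, this is my own illustration with invented identifiers, not BlueVenn’s actual code.

```python
class IdentityGraph:
    """Toy identity resolution: link identifiers (cookies, emails, device IDs)
    that appear together, then resolve any identifier to a single customer key."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        # Path-compressing union-find lookup.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, id_a, id_b):
        # Called for each observed association, e.g. a form submit that
        # ties a cookie to an email address.
        root_a, root_b = self._find(id_a), self._find(id_b)
        if root_a != root_b:
            self.parent[root_b] = root_a

    def customer_key(self, any_id):
        # Real-time lookup: given any known identifier, return its cluster key.
        return self._find(any_id)

# Nightly batch: load observed associations and build the clusters.
graph = IdentityGraph()
observations = [
    ("cookie:abc123", "email:sue@example.com"),    # form submit
    ("email:sue@example.com", "device:ios-7788"),  # app login
]
for a, b in observations:
    graph.link(a, b)

# Real-time call from a web or call-center channel.
print(graph.customer_key("device:ios-7788"))  # same key as cookie:abc123
```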
Users can enhance imported data by defining derived elements with functions similar to Excel formulas. These let non-technical users put data into formats they need without the help of technical staff. Derived fields can be used in queries and reports, embedded in other derived fields, and shared among users. To avoid nasty accidents, BlueVenn blocks changes in a field definition if the field is used elsewhere. Data can be read by Tableau and other third-party tools for analysis and reporting.
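Here is a minimal sketch of how derived fields like these might behave, including the rule that blocks changes to a field that another definition depends on. The field names and formulas are made up for illustration; BlueVenn’s actual syntax will differ.

```python
# Each derived field: a formula over the customer record plus the set of
# fields it references (used to block unsafe edits). All names are hypothetical.
derived_fields = {
    "full_name":     (lambda r: f"{r['first_name']} {r['last_name']}", {"first_name", "last_name"}),
    "is_high_value": (lambda r: r["lifetime_spend"] > 1000,            {"lifetime_spend"}),
    "greeting":      (lambda r: f"Dear {r['full_name']}",              {"full_name"}),  # built on another derived field
}

def evaluate(record):
    # Apply derived fields in definition order so later ones can reuse earlier ones.
    out = dict(record)
    for name, (formula, _deps) in derived_fields.items():
        out[name] = formula(out)
    return out

def can_change(field_name):
    # Mimic the "block changes if the field is used elsewhere" rule.
    return all(field_name not in deps
               for other, (_formula, deps) in derived_fields.items()
               if other != field_name)

print(evaluate({"first_name": "Sue", "last_name": "Jones", "lifetime_spend": 1500}))
print(can_change("full_name"))  # False: "greeting" is built on it
print(can_change("greeting"))   # True: nothing else references it
```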
BlueVenn offers several options for defining customer segments, including cross tabs, geographic map overlays, and flow charts that merge and split different groups. But BlueVenn's signature selection tool has always been the Venn diagram (intersecting circles). This is made possible by a columnar database engine that is extremely fast at finding records with shared data elements. Clients could also use other databases including SQL Server, Amazon Redshift (also columnar), or MongoDB, although BlueVenn says nearly all its clients use the BlueVenn engine for its combination of high speed and low cost.
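A rough illustration of why a column-oriented design makes Venn-style selection fast: if each attribute maps to the set of customer IDs that share it, the overlapping circles are just set intersections. This is a conceptual sketch with invented segments, not a description of how the BlueVenn engine is actually built.

```python
# Column-oriented toy: each attribute value maps to the set of customer IDs
# that share it, so Venn overlaps become cheap set arithmetic.
segments = {
    "bought_shoes":     {101, 102, 103, 105},
    "opened_email":     {102, 103, 104},
    "lives_in_chicago": {103, 105, 106},
}

# The middle of a three-circle Venn diagram: customers in all three segments.
core = segments["bought_shoes"] & segments["opened_email"] & segments["lives_in_chicago"]
print(core)  # {103}

# One "crescent": bought shoes and opened the email, but not in Chicago.
crescent = (segments["bought_shoes"] & segments["opened_email"]) - segments["lives_in_chicago"]
print(crescent)  # {102}
```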
Customer journeys – formerly known as campaigns – are set up by connecting icons on a flow chart. The flow can be split based on yes/no criteria, field values, query results, or random groups. Records in each branch can be sent a communication, assigned to seed lists or control groups, deduplicated, tagged, held for a wait period or until they respond, merged with other branches, or exit the flow. The “merge” feature is especially important because it allows journeys to cycle indefinitely rather than ending after a sequence of steps. Merge also simplifies journey design since paths can be reunified after a split. Even today, most campaign flow charts don’t do merges.
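To show why merge matters, here is a hypothetical sketch of a single journey pass that splits on a yes/no criterion, treats each branch differently, and then merges the branches back so the flow can cycle instead of ending. The criteria and message names are invented.

```python
import random

def journey_step(customers):
    """One pass through a toy journey: split on a yes/no criterion, treat each
    branch differently, then merge the branches so the flow can repeat."""
    responders = [c for c in customers if c["responded"]]
    non_responders = [c for c in customers if not c["responded"]]

    for c in responders:
        c["last_message"] = "thank_you_offer"
    for c in non_responders:
        c["last_message"] = "reminder_email"

    # Merge: both branches rejoin a single stream instead of exiting,
    # which is what lets a journey cycle indefinitely.
    return responders + non_responders

customers = [{"id": i, "responded": random.random() < 0.3, "last_message": None}
             for i in range(5)]

# Run the same flow repeatedly; in a real journey a wait step would separate passes.
for week in range(3):
    customers = journey_step(customers)
    for c in customers:
        c["responded"] = random.random() < 0.3  # simulate new responses before the next cycle
```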
Tagging is also important because it lets marketers flag customers based on a combination of behaviors and data attributes. Tags can be used to control subsequent flow steps. Because tags are attached to the customer record, they can be used to coordinate journeys: one application cited by BlueVenn is to tag customers for future messages in multiple journeys and then periodically compare the tags to decide which message should actually be delivered.
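A quick sketch of that coordination pattern, using invented tags and priorities rather than anything BlueVenn-specific: each journey tags the customer with its proposed message, and a periodic arbitration pass compares the tags and keeps only the winner.

```python
# Hypothetical message priorities; higher wins the periodic comparison.
PRIORITY = {"win_back_offer": 3, "cross_sell": 2, "newsletter": 1}

customer = {"id": 101, "tags": []}

# Each journey appends its proposed message as a tag on the customer record.
customer["tags"].append("newsletter")      # from the nurture journey
customer["tags"].append("win_back_offer")  # from the churn-risk journey

def arbitrate(cust):
    # Pick the highest-priority tagged message and clear the rest.
    if not cust["tags"]:
        return None
    winner = max(cust["tags"], key=lambda t: PRIORITY.get(t, 0))
    cust["tags"].clear()
    return winner

print(arbitrate(customer))  # "win_back_offer"
```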
Communications are handled by something called BlueRelevance. This puts a line of code on client Web sites to gather click stream data, manage first party cookies, and deliver personalized messages. The messages can include different forms of dynamic content including recommendations, coupons, and banners. In addition to Web pages, BlueVenn can send batch and triggered emails, text messages, file transfers, and direct messages in Twitter and Facebook. Next year it will add display ad audiences and Facebook Custom Audiences. The vendor is also integrating with the R statistical system for predictive models and scoring. BlueVenn has 23 API integrations with delivery systems such as specific email providers and builds new integrations as clients need them.
All BlueVenn features are delivered as part of a single package. Pricing is based on the number of sources and contacts, starting at $3,000 per month for two sources and 100,000 contacts. There is a separate fee for setting up the unified database, which can range from $50,000 to $300,000 or more depending on complexity. Clients can purchase the configured database management system if they want to run it themselves. The company also offers a Software-as-a-Service version or a hybrid system that is managed by BlueVenn on the client's own computers. BlueVenn has about 400 total clients, of which about two dozen run the latest version of its system. It sells primarily to mid-size companies, which it defines as $25 million to $1 billion in revenue.
_____________________________________________________________________________
*The original SmartFocus was purchased in 2011 by Emailvision, which changed its own name to SmartFocus in 2013 and then sold the business (technology, clients, etc.) but kept the name for itself. If you’re really into trivia, SmartFocus began life in 1995 as Brann Viper, and BlueVenn is part of Blue Group Inc. which also owns a database marketing services agency called Blue Sheep. The good news is: this won't be on the final.
Thursday, December 08, 2016
Can Customer Data Platforms Make Decisions? Discuss.
That may seem pretty abstract but bear with me because this isn’t really about definitions. It’s about what systems do and how they’re built. To clear the ground a bit, the definition of CDP, per the CDP Institute, is “a marketer-managed system that creates a persistent, unified customer database that is accessible to other systems". Other people have other definitions but they are pretty similar. You’ll note there’s nothing in that definition about doing anything with data beyond making it available. So, no, a CDP doesn’t need to have customer management features.
But there’s nothing in the definition to prohibit those features, either. So a CDP could certainly be part of a larger system, in the same way that a motor is part of a farm tractor. But most farmers would call what they’re buying a tractor, not a motor. For the same reasons, I generally don’t refer to systems as CDPs if their primary purpose is to deliver an application, even though they may build a unified customer database to support that application.
The boundary gets a little fuzzier when the system makes that unified database available to external systems – which, you’ll recall, is part of the CDP definition. Those systems could be used as CDPs, in exactly the same way that farm tractors have “power take off” devices that use their motor to run other machinery. But unless you’re buying that tractor primarily as a power source, you’re still going to think of it as a tractor. The motor and power take off will simply be among the features you consider when making a choice.*
So much for definitions. The vastly more important question is SHOULD people buy "pure" CDPs or systems that contain a CDP plus applications. At the risk of overworking our poor little tractor, the answer is the same as the farmer’s: it depends on how you’ll use it. If a particular system offers the only application you need, you can buy it without worrying about access by other applications. At the other extreme, if you have many external applications to connect, then it almost doesn’t matter whether the CDP has applications of its own. In between – which is where most people live – the integrated application is likely to add value but you also want to connect with other systems. So, as a practical matter, we find that many buyers pick CDPs based on both integrated applications and external access. From the CDP vendor’s viewpoint, this connectivity is helpful because it makes their system more important to their clients.
The tractor analogy also helps show why data-only CDPs have been sold almost exclusively to large enterprises. Those companies have many existing systems that can all benefit from a better database. In tractor terms, they need the best motor possible for power applications and have other machines for tasks like pulling a plow. A smaller farm needs one tractor that can do many different tasks.
I may have driven the tractor metaphor into a ditch. Regardless, the important point is that a system optimized for a single task – whether it’s sharing customer data or powering farm equipment – is designed differently from a system that’s designed to do several things. I’m not at all opposed to systems that combine customer data assembly with applications. In fact, I think Journey Orchestration Engines (JOEs), which often combine customer data with journey orchestration, make a huge amount of sense. But most JOE databases are not designed with external access in mind. A JOE database designed for open access would be even better -- although maybe we shouldn't call it a CDP.
To put this in my more usual terms of Data, Decision, and Delivery layers: a CDP creates a unified Data layer, while most JOEs create a unified Data and Decision layer. There’s a clear benefit to unifying decisions when our goal is a consistent customer treatment across all delivery systems. What’s less clear is the benefit of having the same system combine the data and decision functions. The combination avoids integration issues. But it also means the buyer must use both components, even though she might prefer a different tool for one or the other.
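To make the layering concrete, here is a hypothetical sketch in which the Data layer (a stand-in for a CDP) only assembles and exposes profiles, a separate Decision layer reads from it to choose an action, and whatever Delivery system sits downstream executes the result. The class and field names are invented for illustration, not drawn from any vendor’s API.

```python
# Data layer: a stand-in for a CDP that only assembles and exposes profiles.
class CustomerDataPlatform:
    def __init__(self):
        self._profiles = {}

    def ingest(self, customer_id, source, attributes):
        profile = self._profiles.setdefault(customer_id, {})
        profile.update(attributes)  # naive merge; a real CDP does matching, hygiene, etc.

    def get_profile(self, customer_id):
        # The "accessible to other systems" part of the CDP definition.
        return self._profiles.get(customer_id, {})

# Decision layer: a separate engine that reads from the CDP rather than owning the data.
class DecisionEngine:
    def __init__(self, cdp):
        self.cdp = cdp

    def next_best_action(self, customer_id):
        p = self.cdp.get_profile(customer_id)
        if p.get("cart_abandoned"):
            return "send_cart_reminder"
        if p.get("lifetime_spend", 0) > 1000:
            return "offer_loyalty_upgrade"
        return "do_nothing"

# Delivery layer: whatever channel system executes the chosen action.
cdp = CustomerDataPlatform()
cdp.ingest(101, "web", {"cart_abandoned": True})
cdp.ingest(101, "orders", {"lifetime_spend": 1500})

engine = DecisionEngine(cdp)
print(engine.next_best_action(101))  # "send_cart_reminder"
```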
Remember that there’s nothing inherent in JOEs that requires them to provide both layers. A JOE could have only the decision function and connect to a separate CDP. The fact that most JOEs create a database is just a matter of necessity: most companies don’t have a database in place, so the JOE must build one in order to do the fun stuff (orchestration). Many other tools, such as B2B predictive analytics and customer success systems, create their own database for exactly the same reason. In fact, I originally classified those systems as CDPs although I’ve now narrowed my definition since the database is not their focus.
So I hope this clarifies things: CDPs can have decision functions but if decisions are the main purpose of the system, it’s confusing to call it a CDP. And CDPs are certainly not required to have decision functions, although many do include them to give buyers a quick return on their investment. If that seems like waffling, then so be it: what matters is helping marketers to understand what they’re getting so they get what they really need.
_________________________________________________________________
*I’ll guess few of my readers are very familiar with farm tractors. Maybe the more modern analogy is powering apps with your smartphone. For the record, I did work on a farm when I was a lad, and drove a tractor.
Wednesday, November 30, 2016
3 Insights to Help Build Your Unified Customer Database
The Customer Data Platform Institute (which is run by Raab Associates) on Monday published results of a survey we conducted in cooperation with MarTech Advisor. The goal was to assess the current state of customer data unification and, more important, to start exploring management practices that help companies create the rare-but-coveted single customer view.
You can download the full survey report here (registration required) and I’ve already written some analysis on the Institute blog. But it’s a rich set of data so this post will highlight some other helpful insights.
1. All central customer databases are not equal.
We asked several different questions whose answers depended in part on whether the respondent had a unified customer database. The percentage who said they did ranged from 14% to 72%:
I should stress that these answers all came from the same people and we only analyzed responses with answers to all questions. And, although we didn’t test their mental states, I doubt a significant fraction had multiple personality disorders. One lesson is that the exact question really matters, which makes comparing answers across different surveys quite unreliable. But the more interesting insight is there are real differences in the degree of integration involved with sharing customer data.
You’ll notice the question with the fewest positive answers – “many systems connected through a shared customer database” – describes a high level of integration. It’s not just that data is loaded into a central database, but that systems are actually connected to a shared central database. Since context clearly matters, here is the actual question and other available answers:
The other questions set a lower bar, referring to a “unified customer database” (33%), “central database” (42%), and “central customer database” (57%). Those answers could include systems where data is copied into a central database but then used only for analysis. That is, they don’t imply connections or sharing with operational customer-facing systems. They also could describe situations where one primary system has all the data and thus functions as a central or unified database.
The 72% question covered an even broader set of possibilities because it only described how customer data is combined, not where those combinations take place. That is, the combinations could be happening in operational systems that share data directly: no central database is required or even implied. Here are the exact options:
The same range of possibilities is reflected in answers about how people would use a single customer view. The most common answers are personalization and customer insights. Those require little or no integration between operational systems and the central database, since personalization can easily be supported by periodically synchronizing a few data elements. It’s telling that consistent treatments ranks almost dead last – even though consistent experiences are often cited as the reason a central database is urgently required.
This array of options to describe the central customer database suggests a maturity model or deployment sequence. It would start with limited unification by sharing data directly between systems (the most common approach, based on the stack question shown above), progress to a central database that assembles the data but doesn’t share it with the operational systems, and ultimately achieve the perfect bliss of unity, which, in martech terms, means all operational systems are using the shared database to execute customer interactions. Purists might be troubled by these shades of gray, but they offer a practical path to salvation. In any case, it’s certainly important to keep these degrees in mind and clarify what anyone means when they talk about shared customer data or that single customer view.
2. You must have faith.
Hmm, a religious theme seems to be emerging. I hadn’t intended that but maybe it’s appropriate. In any event, I’ve long argued that the real reason technologies like marketing automation and predictive modeling don’t get adopted more quickly is not the practical obstacles or lack of proven value, but lack of belief among managers that they are worthwhile. This doesn’t show up in surveys, which usually show things like budget, organization, and technology as the main obstacles. My logic has been that those are basically excuses: people would find the resources and overcome the organizational barriers if they felt the project were important enough. So citing budgets and organizational constraints really means they see better uses for their limited resources.
The survey data supports my view nicely. Looking at everyone’s answers to a question about obstacles, the answers are rather muddled: budget is indeed the most commonly cited obstacle (41%), followed closely by the technical barrier of extracting data from source systems (39%). Then there’s a virtual tie among organizational roadblocks (31%), other priorities in IT (29%), other priorities in marketing (29%), and systems that can’t use the data (29%). Not much of a pattern there.
But when you divide the respondents based on whether they think a single customer view is important for over-all marketing success, a stark division emerges. Budget and organization are the top two obstacles for people who don’t think the unified view is needed, while having systems that can extract and use the data are the top two obstacles for people who do think it’s necessary for success. In other words, the people committed to unified data are focused on practical obstacles, while those who aren’t are using the same objections they apply to everything else.
Not surprisingly, people who classify SCV as extremely important are more likely to actually have a database in place than people who consider it just very important, who in turn have more databases than people who consider it even less important or not important at all. (In case you're wondering, each group accounts for roughly one-third of the total.)
The same split applies to what people would consider helpful in building a single customer view: people who consider the single view important are most interested in best practices, case studies, and planning assumptions – i.e., building a business case. Those who think it’s unimportant ask for product information, vendor lists, and pricing. I find this particular split a bit puzzling, since you’d think people who don’t much care about a unified database would be least interested in the details of building one. A cynic might say they’re looking for excuses (cost is too high) but maybe they’re actually trying to find an easy solution so they can avoid a major investment.
Jumping ahead just a bit, the idea that SCV doubters are less engaged than believers also shows up in the management tools they use. People who rated SCV as extremely important were much more likely to use all the tools we asked about. Interestingly, the biggest gap is in use of value metrics. This could be read to mean that people become believers after they measure the value of a central database, or that people set up measurements after they decide they need to prove their beliefs. My theology is pretty rusty but surely there’s a standard debate about whether faith or action comes first.
Regardless of the exact reasons for the different attitudes, the fundamental insight here is that people who consider a single view important act quite differently from people who don’t. This means that if you’re trying to sell a customer database, either in your own company or as a vendor, you need to understand who falls into which category and address them in appropriate terms. And I guess a little prayer never hurt.
3. Tools matter.
We’ve already seen that believers have more databases and have more tools, so you won’t be surprised that using more tools correlates directly with having or planning a database.
Let's introduce the tools formally. Here are the exact definitions we used and the percentage of people who said each was present in their organization:
Of course, the really interesting question isn’t which tools are most popular but which actually contribute to (or at least correlate with) deploying a database. We looked at tool use for three groups: people with a database, people planning a database, and people with no such plans.
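For anyone who wants to run the same kind of cross-tab on their own data, here is a toy sketch of the calculation, using invented respondent records and tool names rather than the actual survey file.

```python
from collections import Counter, defaultdict

# Toy respondent records: which management tools each uses and their database status.
respondents = [
    {"tools": {"long_term_plan", "value_metrics"}, "db_status": "have"},
    {"tools": {"long_term_plan"},                  "db_status": "planning"},
    {"tools": set(),                               "db_status": "none"},
    {"tools": {"value_metrics"},                   "db_status": "have"},
]

# For each tool, tally database status among the respondents who use it.
crosstab = defaultdict(Counter)
for r in respondents:
    for tool in r["tools"]:
        crosstab[tool][r["db_status"]] += 1

for tool, counts in crosstab.items():
    total = sum(counts.values())
    pct_have = 100 * counts["have"] / total
    print(f"{tool}: {pct_have:.0f}% have a unified database (n={total})")
```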
Over all, results for the different tools were pretty similar: people who used each tool were much more likely to have a database and somewhat more likely to plan to build one. The pattern is a bit jumbled for Centers of Excellence and technology standards, but the numbers are small so the differences may not be significant. But it's still worth noting that Centers of Excellence are really tools to diffuse expertise in using marketing technology and don’t have too much to do with actually creating a customer database.
If you’re looking for a dog that didn’t bark, you might have expected companies using agile to be exceptionally likely to either have a database or be planning one. All quiet on that front: the numbers for agile look like numbers for long term planning and value metrics, adjusting for relative popularity. So agile is helpful but not a magic bullet.
What have we learned here?
Clearly, we've learned that management tools are important and that long term planning in particular is both the most common and the best predictor of success.
We also found that tools aren’t enough: managers need to be convinced that a unified customer view is important before they’ll invest in a database or tools to build it.
And, going back to the beginning, we saw that there are many forms of unified data, varying in how data is shared, where it’s stored, how it’s unified, and how it’s used. While it’s easy enough to assume that tight, real-time integration is needed to provide unified omni-channel customer experiences, many marketers would be satisfied with much less. I’d personally hope to see more but, as every good missionary knows, people move towards enlightenment in many small steps.
Friday, November 25, 2016
Pega Customer Decision Hub Offers High-End Customer Journey Orchestration
My previous posts about Journey Orchestration Engines (JOEs) have all pointed to new products. But some older systems qualify as well. In some ways they are even more interesting because they illustrate a mature version of the concept.
The Customer Decision Hub from Pega (formerly PegaSystems) is certainly mature: the product can trace its roots back well over a decade, to a pioneering company called KiQ Limited, which was purchased in 2004 by Chordiant, which Pega purchased in 2010. Obviously the system has been updated many times since then but its core approach to optimizing real-time decisions across all channels has stayed remarkably constant. Indeed, some features the product had a decade ago are still cutting edge today – my favorite is simulation of proposed decision rules to assess their impact before deployment.
Pega positions Customer Decision Hub as part of its core platform, which supports applications for marketing, sales automation, customer service, and operations. It competes with the usual enterprise suspects: Adobe, Oracle, Salesforce.com, IBM, and SAS. Even more than those vendors, Pega focuses on selling to large companies, describing its market as primarily the Fortune 3000. So if you’re not working at one of those firms, consider the rest of this article a template for what you might look for elsewhere.
The current incarnation of Customer Decision Hub has six components: Predictive Analytics Director to build offline predictive models, Adaptive Decision Manager to build self-learning real-time models, Decision Strategy Manager to set rules for making decisions, Event Strategy Manager to monitor for significant events, Next Best Action Advisor to deliver decisions to customer-facing systems, and Visual Business Director for planning, simulation, visualization, and over-all management. From a journey orchestration perspective, the most interesting of these are Decision Strategy Manager and Event Strategy Manager, because they’re the pieces that select customer treatments. The other components provide inputs (Predictive Analytics Director and Adaptive Decision Manager), support execution (Next Best Action Advisor), or give management control (Visual Business Director).
Decision Strategy Manager is where the serious decision management takes place. It brings together audiences, offers, and actions. Audiences can be built using segmentation rules or selected by predictive models. Offers can include multi-step flows with interactions over time and across channels. Actions can be anything, not just marketing messages, and may include doing nothing. They are selected using arbitration rules that specify the relevance of each action to an audience, rank the action based on eligibility and prioritization, and define where the action can be delivered.
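To make the arbitration idea concrete, here is a stripped-down sketch of next-best-action selection with eligibility rules, a priority score, and allowed channels. The actions, scores, and field names are invented; Pega’s actual rule engine is far richer than this.

```python
# Candidate actions: an eligibility rule, a relevance/priority score, and allowed channels.
ACTIONS = [
    {"name": "retention_offer", "eligible": lambda c: c["churn_risk"] > 0.7,
     "score": lambda c: 10 * c["churn_risk"], "channels": {"call_center", "email"}},
    {"name": "upsell_premium", "eligible": lambda c: c["tenure_months"] > 12,
     "score": lambda c: 5 + c["monthly_spend"] / 20, "channels": {"web", "email"}},
    {"name": "do_nothing", "eligible": lambda c: True,
     "score": lambda c: 1.0, "channels": {"web", "email", "call_center"}},
]

def next_best_action(customer, channel):
    # Keep actions the customer is eligible for and that this channel can deliver,
    # then pick the highest-scoring one (the arbitration step).
    candidates = [a for a in ACTIONS
                  if a["eligible"](customer) and channel in a["channels"]]
    return max(candidates, key=lambda a: a["score"](customer))["name"]

customer = {"churn_risk": 0.85, "tenure_months": 20, "monthly_spend": 80}
print(next_best_action(customer, "call_center"))  # "retention_offer"
print(next_best_action(customer, "web"))          # "upsell_premium"
```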
The concept of “relevance” is what qualifies Decision Hub as a JOE. It measures the value of each action against the customer’s current needs and context. This is the functional equivalent of defining journey stages or customer states, even though Pega doesn’t map how customers move from one state to another. The interface to set up the arbitration rules is where Decision Hub’s maturity is most obvious. For example, users can build predictive model scores into decision rules and can set up a/b tests within the arbitration to compare different approaches.
Event Strategy Manager lets users define events based on data patterns, such as three dropped phone calls within a week. These events can trigger specific actions or factor into a decision strategy arbitration. It’s another way of bringing context to bear and thus of ensuring each decision is appropriate to the customer’s current journey stage. Like arbitration rules in Decision Strategy Manager, the event definitions in Event Strategy Manager can be subtle and complex. The system is also powerful in being able to connect to nearly any type of data stream, including social, mobile, and Internet of Things devices as well as traditional structured data.
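Here is a toy sketch of detecting that kind of pattern – the hypothetical three dropped calls within a week – over a rolling window. It illustrates the concept, not Pega’s implementation.

```python
from datetime import datetime, timedelta
from collections import defaultdict, deque

WINDOW = timedelta(days=7)
THRESHOLD = 3
recent_drops = defaultdict(deque)   # customer_id -> timestamps of recent dropped calls

def on_event(customer_id, event_type, timestamp):
    """Return a triggered event name when the pattern fires, else None."""
    if event_type != "dropped_call":
        return None
    window = recent_drops[customer_id]
    window.append(timestamp)
    # Discard drops older than the rolling one-week window.
    while window and timestamp - window[0] > WINDOW:
        window.popleft()
    if len(window) >= THRESHOLD:
        window.clear()                   # avoid re-firing on the same calls
        return "repeated_dropped_calls"  # feed this into the decision arbitration
    return None

now = datetime(2016, 11, 25, 9, 0)
print(on_event(1, "dropped_call", now))                      # None
print(on_event(1, "dropped_call", now + timedelta(days=2)))  # None
print(on_event(1, "dropped_call", now + timedelta(days=5)))  # "repeated_dropped_calls"
```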
I won't go into details of other Decision Hub components, but they’re equally advanced. Companies with the scale to afford the system can expect it to pay for itself: in one published study, the three-year cost was $7.7 million but incremental revenue was $362 million. Pega says few deployments cost less than $250,000 and most are over $1 million. As I say, this isn’t a system for everyone. But it does set a benchmark for other options.
The Customer Decision Hub from Pega (formerly PegaSystems) is certainly mature: the product can trace its roots back well over a decade, to a pioneering company called KiQ Limited, which was purchased in 2004 by Chordiant, which Pega purchased in 2010. Obviously the system has been updated many times since then but its core approach to optimizing real-time decisions across all channels has stayed remarkably constant. Indeed, some features the product had a decade ago are still cutting edge today – my favorite is simulation of proposed decision rules to assess their impact before deployment.
Pega positions Customer Decision Hub as part of its core platform, which supports applications for marketing, sales automation, customer service, and operations. It competes with the usual enterprise suspects: Adobe, Oracle, Salesforce.com, IBM, and SAS. Even more than those vendors, Pega focuses on selling to large companies, describing its market as primarily the Fortune 3000. So if you’re not working at one of those firms, consider the rest of this article a template for what you might look for elsewhere.
The current incarnation of Customer Decision Hub has six components: Predictive Analytics Director to build offline predictive models, Adaptive Decision Manager to build self-learning real-time models, Decision Strategy Manager to set rules for making decisions, Event Strategy Manager to monitor for significant events, Next Best Action Advisor to deliver decisions to customer-facing systems, and Visual Business Director for planning, simulation, visualization, and overall management. From a journey orchestration perspective, the most interesting of these are Decision Strategy Manager and Event Strategy Manager, because they’re the pieces that select customer treatments. The other components provide inputs (Predictive Analytics Director and Adaptive Decision Manager), support execution (Next Best Action Advisor), or give management control (Visual Business Director).
Decision Strategy Manager is where the serious decision management takes place. It brings together audiences, offers, and actions. Audiences can be built using segmentation rules or selected by predictive models. Offers can include multi-step flows with interactions over time and across channels. Actions can be anything, not just marketing messages, and may include doing nothing. They are selected using arbitration rules that specify the relevance of each action to an audience, rank the action based on eligibility and prioritization, and define where the action can be delivered.
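To make the arbitration idea concrete, here’s a rough Python sketch of the general pattern: filter actions on eligibility, then rank them by relevance weighted by a business priority. To be clear, this is my own illustration, not Pega’s code, and every name and weight in it is invented.

```python
# A minimal sketch of next-best-action arbitration of the kind Decision
# Strategy Manager performs. Names and weights are hypothetical; Pega's
# actual rule engine is far richer.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    channel: str
    is_eligible: Callable[[Dict], bool]   # eligibility rule
    relevance: Callable[[Dict], float]    # value vs. the customer's current needs/context
    priority: float                       # business weighting

def arbitrate(customer: Dict, actions: List[Action]) -> List[Action]:
    """Filter on eligibility, then rank by relevance x priority."""
    eligible = [a for a in actions if a.is_eligible(customer)]
    return sorted(eligible, key=lambda a: a.relevance(customer) * a.priority, reverse=True)

actions = [
    Action("retention_offer", "call_center",
           lambda c: c["churn_risk"] > 0.6, lambda c: c["churn_risk"], 1.5),
    Action("upsell_premium", "email",
           lambda c: c["tenure_months"] > 12, lambda c: c["upsell_score"], 1.0),
    Action("do_nothing", "none", lambda c: True, lambda c: 0.1, 1.0),
]

customer = {"churn_risk": 0.8, "upsell_score": 0.4, "tenure_months": 20}
ranked = arbitrate(customer, actions)
print([a.name for a in ranked])   # retention_offer ranks first for this customer
```

The real system layers predictive scores, channel constraints, and testing on top of this skeleton, but filter-then-rank is the essential shape of the decision.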
The concept of “relevance” is what qualifies Decision Hub as a JOE. It measures the value of each action against the customer’s current needs and context. This is the functional equivalent of defining journey stages or customer states, even though Pega doesn’t map how customers move from one state to another. The interface to set up the arbitration rules is where Decision Hub’s maturity is most obvious. For example, users can build predictive model scores into decision rules and can set up A/B tests within the arbitration to compare different approaches.
Event Strategy Manager lets users define events based on data patterns, such as three dropped phone calls within a week. These events can trigger specific actions or factor into a decision strategy arbitration. It’s another way of bringing context to bear and thus of ensuring each decision is appropriate to the customer’s current journey stage. Like arbitration rules in Decision Strategy Manager, the event definitions in Event Strategy Manager can be subtle and complex. The system is also powerful in being able to connect to nearly any type of data stream, including social, mobile, and Internet of Things devices as well as traditional structured data.
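Again purely as illustration – not Pega’s implementation – here’s a minimal sketch of that kind of event pattern: watching a stream for three dropped calls from the same customer within a rolling week. All names are made up.

```python
# A sketch of the kind of pattern Event Strategy Manager watches for:
# three dropped calls from one customer within a rolling seven-day window.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)
THRESHOLD = 3
recent_drops = defaultdict(deque)   # customer_id -> timestamps of dropped calls

def on_event(customer_id: str, event_type: str, ts: datetime) -> bool:
    """Return True when the dropped-call pattern fires for this customer."""
    if event_type != "dropped_call":
        return False
    window = recent_drops[customer_id]
    window.append(ts)
    while window and ts - window[0] > WINDOW:
        window.popleft()            # discard drops older than the window
    return len(window) >= THRESHOLD

# Feeding a small stream: the third drop inside seven days triggers the event.
stream = [("cust42", "dropped_call", datetime(2016, 11, 1)),
          ("cust42", "dropped_call", datetime(2016, 11, 3)),
          ("cust42", "dropped_call", datetime(2016, 11, 5))]
print([on_event(*e) for e in stream])   # [False, False, True]
```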
I won't go into details of other Decision Hub components, but they’re equally advanced. Companies with the scale to afford the system can expect it to pay for itself: in one published study, the three-year cost was $7.7 million but incremental revenue was $362 million. Pega says few deployments cost less than $250,000 and most are over $1 million. As I say, this isn’t a system for everyone. But it does set a benchmark for other options.
Monday, November 14, 2016
HubSpot Announces LinkedIn, Facebook Partnerships and Free Marketing Automation Edition at INBOUND Conference
HubSpot held its annual INBOUND conference in Boston last week. Maybe it's me, but the show seemed to lack some of its usual self-congratulatory excitement: for example, CEO Brian Halligan didn’t present the familiar company scorecard touting growth in customers and revenues. (A quick check of financial reports shows those are just fine: the company is expecting about a 45% revenue increase for 2016.) Even the insights that Halligan and co-founder Dharmesh Shah presented in their keynotes seemed familiar: I'm guessing you've already heard that video, social, messaging, free trials, and chatbots will be big.
My own attention was more focused on the product announcements. The big news was a free version of HubSpot’s core marketing platform, joining free versions already available of its CRM and Sales systems. (In Hubspeak, CRM is the underlying database that tracks and manages customer interactions, while Sales is tools for salesperson productivity in email and elsewhere.) Using free versions to grow marketing automation has consistently failed in the past, probably because people attracted by a free system aren't willing to do the substantial work needed for marketing automation success. But HubSpot managers are aware of this history and seem confident they have a way to cost-effectively nurture a useful fraction of freemium users towards paid status. We'll see.
The company also announced enhancements to existing products. Many were features that already exist in other mid-tier systems, including branching visual workflows, sessions within Web analytics reports, parent/child relationships among business records, and detailed control over user permissions. As HubSpot explained it, the modest scope of these changes reflects a focus on simplifying the system rather than making it super-powerful. One good example of this attitude was a new on-site chat feature, which seems basic enough but has some serious hidden cleverness in automatically routing chat requests to the right sales person, pulling up the right CRM record for the agent, and adding the chat conversation to the customer history.
One feature that did strike me as innovative was closer to HubSpot’s roots in search marketing: a new “content strategy” tool reflecting the shift from keywords to topics as the basis of search results. HubSpot’s tool helps marketers find the best topics to try to dominate with their content. This will be very valuable for marketers unfamiliar with the new search optimization methods. Still, what you really want is a system that helps you create that content. HubSpot does seem to be working on that.
With relatively modest product news, the most interesting announcements at the conference were probably about HubSpot’s alliances. A new Facebook integration lets users create Facebook lead generation campaigns within HubSpot and posts leads from those campaigns directly to the HubSpot database. A new LinkedIn integration shows profiles from LinkedIn Sales Navigator within HubSpot CRM screens for users who have a Sales Navigator subscription. Both integrations were presented as first steps towards deeper relationships. These relationships reflect the growing prominence of HubSpot among CRM/marketing automation vendors, which gives companies like Microsoft and LinkedIn a reason to pick HubSpot as a partner. This, in turn, lets HubSpot offer features that less well-connected competitors cannot duplicate. That sets up a positive cycle of growth and expansion that is very much in HubSpot’s favor.
As an aside, the partnerships raise the question of whether Microsoft might just purchase HubSpot and use it to replace or supplement the existing Dynamics CRM products. Makes a lot of sense to me. A Facebook purchase seems unlikely but, as we also learned last week, unlikely things do sometimes happen.
Labels: crm, facebook, freemium, hubspot, inbound marketing, linkedin, marketing automation
Wednesday, November 09, 2016
ActionIQ Merges Customer Data Without Reformatting
One of the fascinating things about the Customer Data Platform Institute is how developers from different backgrounds have converged on similar solutions. The leaders of ActionIQ, for example, are big data experts: Tasso Argyros founded Aster Data, which was later purchased by Teradata, and Nitay Joffe was a core contributor to HBase and the data infrastructure at Facebook. In their previous lives, both saw marketers struggling to assemble and activate useful customer data. Not surprisingly, they took a database-centric approach to solving the problem.
What particularly sets ActionIQ apart is the ability to work with data from any source in its original structure. The system simply takes a copy of source files as they are, lets users define derived variables based on those files, and uses proprietary techniques to query and segment against those variables almost instantly. It’s the scalability that’s really important here: at one client, ActionIQ scans two billion events in a few seconds. Or, more precisely, it’s the scalability plus flexibility: because all queries work by re-reading the raw data, users can redefine their variables at any time and apply them to all existing data. Or, really, it's scalability, flexibility, and speed, because new data is available within the system in minutes.
So, amongst ActionIQ’s many advantages are scalability, flexibility, and speed. These contrast with systems that require users to summarize data in advance and then either discard the original detail or take much longer to resummarize the data if a definition changes.
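To show what that difference means in practice, here’s a toy sketch of the general principle (not ActionIQ’s engine, and with invented field names): raw events are stored untouched and derived variables are just queries over them, so changing a definition means re-running the query rather than rebuilding summary tables.

```python
# Keep raw events exactly as they arrived and express derived variables as
# functions over them; a definition change is just a different query.
from datetime import date

raw_events = [  # stored as-is from the source systems
    {"customer": "a1", "type": "purchase", "amount": 120.0, "date": date(2016, 9, 2)},
    {"customer": "a1", "type": "purchase", "amount": 35.0,  "date": date(2016, 10, 20)},
    {"customer": "b7", "type": "purchase", "amount": 15.0,  "date": date(2016, 10, 25)},
]

def spend_since(events, customer, since):
    """Derived variable: total spend since a given date, computed from raw detail."""
    return sum(e["amount"] for e in events
               if e["customer"] == customer and e["type"] == "purchase" and e["date"] >= since)

# Today the analyst defines "recent spend" as roughly the last 90 days...
print(spend_since(raw_events, "a1", date(2016, 8, 1)))   # 155.0
# ...tomorrow they redefine it as the last 30 days; no re-summarization needed.
print(spend_since(raw_events, "a1", date(2016, 10, 1)))  # 35.0
```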
ActionIQ presents its approach as offering self-service data access for marketers and other non-technical users. That’s true insofar as marketers work with previously defined variables and audience segments. But defining those variables and segments in the first place takes the same data wrangling skills that analysts have always needed when faced with raw source data. ActionIQ reduces work for those analysts by making it easier to save and reuse their definitions. Its execution speed also reduces the cost of revising those definitions or creating alternate definitions for different purposes. Still, this is definitely a tool for big companies with skilled data analysts on staff.
The system does have some specialized features to support marketing data. These include identity resolution tools including fuzzy matching of similar records (such as different versions of a mailing address) and chaining of related identifiers (such as a device ID linked to an email linked to an account ID). It doesn’t offer “probabilistic” linking of devices that are frequently used in the same location although it can integrate with vendors who do. ActionIQ also creates correlation reports and graphs showing the relationship between pairs of user-specified variables, such as a customer attribute and promotion response. But it doesn’t offer multi-variable predictive models or machine learning.
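ActionIQ hasn’t published its matching code, so the sketch below shows identifier chaining only in the abstract: a union-find structure groups any identifiers connected by a chain of links, such as a device ID tied to an email tied to an account ID. Treat it as a conceptual stand-in, not the vendor’s method.

```python
# Generic identifier chaining via union-find: any identifiers connected by a
# chain of links resolve to the same person.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def link(a, b):
    """Record that two identifiers belong to the same person."""
    parent[find(a)] = find(b)

link("device:abc123", "email:sue@example.com")
link("email:sue@example.com", "account:9981")

# All three identifiers now resolve to one person.
ids = ["device:abc123", "email:sue@example.com", "account:9981"]
print(len({find(i) for i in ids}))   # 1
```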
ActionIQ gives users an interface to segment its data directly. It can also provide a virtual database view that is readable by external SQL queries or PMML-based scoring models. Users can also export audience lists to load into other tools such as campaign managers, Web ad audiences, or Web personalization systems. None of this approaches the power of the multi-step, branching campaign flows of high-end marketing automation systems, but ActionIQ says most of its clients are happy with simple list creation. Like most CDPs, ActionIQ leaves actual message delivery to other products.
The company doesn’t publicly discuss the technical approach it takes to achieve this performance, although it described the approach to me privately and it makes perfect sense. Skeptics should be comforted by the founders’ technical pedigree and the performance the system has already demonstrated. Similarly, ActionIQ asked me not to share screen shots of its user interface or details of its pricing. Suffice to say that both are competitive.
ActionIQ was founded in 2014 and has been in production with its pilot client for over one year. The company formally launched its product last month.
Thursday, November 03, 2016
Walker Sands / Chief Martech Study: Martech Maturity Has Skyrocketed
Tech marketing agency Walker Sands and industry guru Scott Brinker of Chief Martech yesterday published a fascinating survey on the State of Marketing Technology 2017, which you can download here. The 27-page report provides an insightful analysis of the data, which there’s no point in my duplicating in depth. But I will highlight a couple of findings that are most relevant to my own concerns.
Martech maturity has skyrocketed in the past year. This theme shows up throughout the report. The percentage of respondents classifying their companies as innovators or early adopters grew from 20% in 2016 to 48% in 2017; marketers whose companies invest the right amount in marketing technology grew from 50% to 71%; and all obstacles to adoption were less common (with the telling exception of not needing anything new).
Truth be told, I find it hard to believe that things can have shifted this much in a single year and that nearly half of all companies (and 60% of individual marketers) are innovators or early adopters. A more likely explanation is the new survey attracted more advanced respondents than before. We might also be seeing a bit of “Lake Wobegon Effect,” named after Garrison Keillor’s mythical town where all the children are above average. Evidence for the latter might be that 69% felt their marketing technology is up to date and sufficient (up from 58%), making this possibly the most complacent group of innovators ever.
Multi-product architectures are most common. I have no problem accepting this one: 21% of respondents said they use a single-vendor suite, while 69% had some sort of multi-vendor approach (27% integrated best-of-breed, 21% fragmented best-of-breed, 21% limited piecemeal solutions). The remainder had no stack (7%) or proprietary technology (4%).
But don’t assume that “single-vendor suite” necessarily means one of the enterprise marketing clouds. Small companies reported using suites just as often as large ones. They were probably referring to all-in-one products like HubSpot and Infusionsoft.
"Best of breed marketers get the most out of their martech tools." That’s a direct quote from the report, but it may overstate the case: 83% of integrated best-of-breed users felt their company was good or excellent at leveraging the stack, compared with 76% of the single-vendor-suite. That not such a huge difference, especially given the total sample of 335. Moreover, companies with fragmented best-of-breed stacks reported less ability (67%) than the single-vendor suite users. If you combine the two best-of-breed groups then the suite users actually come out ahead. A safer interpretation might be that single-vendor suites are no easier to use than best-of-breed combinations. That would still be important news to companies that think pay a premium or compromise on features because they think suites make are easier to deploy.
Integration isn’t that much of a problem. Just 20% of companies cited better stack integration as a key to fully leveraging their tools, which ranked well behind better strategy (39%), better analytics (36%) and more training (33%) and roughly on par with more employees (23%), better defined KPIs (23%), and more data (20%). This supports the previous point about best-of-breed working fairly well, whether or not the stack was well integrated. I would have expected integration to be a bigger issue, so this is a bracing reality check. One interpretation (as I argued last week) is that integration just isn’t as important to marketers as they often claim.
There’s plenty else of interest in the report, so go ahead and read it and form your own opinions. Thanks to Walker Sands and Chief Martech for pulling it together.
Friday, October 28, 2016
Singing the Customer Data Platform Blues: Who's to Blame for Disjointed Customer Data?
I’m in the midst of collating data from 150 published surveys about marketing technology, a project that is fascinating and stupefying at the same time. A theme related to marketing data seems to be emerging that I didn’t expect and many marketers won’t necessarily be happy to hear.
Most surveys present a familiar tune: many marketers want unified customer data but few have it. This excerpt from an especially fine study by Econsultancy makes the case clearly although plenty of other studies show something similar.
So far so good. The gap is music to my ears, since helping marketers fill it keeps consultants like me in the business. But it inevitably raises the question of why the gap exists.
The conventional answer is it’s a technology problem. Indeed, this Experian survey makes exactly that point: the top barriers are all technology related.
And, comfortingly, marketers can sing their same old song of blaming IT for failing to deliver what they need. For example, even though 61% of companies in this Forbes Insights survey had a central database of some sort, only 14% had fully unified, accessible data.
But something sounds a little funny. After all, doesn’t marketing now control its own fate? In this Ascend2 report, 61% of the marketing departments said they were primarily responsible for marketing data and nearly all the other marketers said they shared responsibility.
Now we hear that quavering note of uncertainty: maybe it’s marketing’s own fault? That’s something I didn’t expect. And the data seems to support it. For example, a study from Black Ink ROI found that the need for better analytics (which implicitly requires better data) was the top barrier to success, and it explicitly listed data access as the third-ranked barrier.
But – and here’s the grand finale – the same study found that data integration software ranked sixth on the marketers’ shopping lists. In other words, even though marketers knew they needed better data, they weren’t planning to spend money to make it happen. That’s a sour chord indeed.
But the song isn't over. If we listen closely, we can barely make out one final chorus: marketers won’t invest in data management technology because they don’t have the skills to use it. Or that’s what this survey from Falcon.io seems to suggest.
In its own way, that’s an upbeat ending. Expertise can be acquired through training or hiring outside experts (or possibly even mending some fences with IT). Better tools, like Customer Data Platforms, help by reducing the expertise needed. So while marketers aren't strutting towards a complete customer view with a triumphal Sousa march, there’s no need for a funeral dirge quite yet.
Wednesday, October 26, 2016
Survey on Customer Data Management - Please Help!
I'm working with MarTech Advisor on a survey to understand the state of customer data management. If you have five minutes or so, could you please fill it out? Link is here. And if possible, pass on to people in other companies who could also help. You'll get a copy of the final report and my gratitude. Just for reading this far, here's a kitten:
Labels: customer data management, martech, survey
Thursday, October 20, 2016
Hull.io Offers A Customer Data Platform for B2B Marketers
The need for a Customer Data Platform – a marketer-controlled, unified, persistent, accessible customer database – applies equally to business and consumer marketing. Indeed, many of the firms I originally identified as CDPs were lead scoring and customer success management vendors who serve primarily B2B clients. But as the category has evolved, I’ve narrowed my filter to count as CDPs only companies that focus primarily on building the unified database. This excludes the predictive modeling vendors and customer success managers, as well as the big marketing clouds that list a CDP as one of many components. Once you apply that filter, nearly all the remaining firms sell largely to B2C enterprise clients.
Hull.io is an exception. Its clients are mostly small, B2B companies – exactly the firms that were first to adopt software-as-a-service (SaaS) technologies including marketing automation and CRM. This is no accident: SaaS solves one problem by making it easy to acquire new systems, but that creates another problem because those systems are often isolated from each other. Hull addresses that problem by unifying their data, or, more precisely, by synchronizing it.
How it works is this: Hull has connectors for major customer-facing SaaS systems, such as Salesforce, Optimizely, HubSpot, Mailchimp, Facebook custom audiences, Slack, and Zendesk. Users connect with those systems and specify data elements or lists to synchronize. When data changes in one of the customer-facing products, the change is sent to Hull, which in turn sends it to the other products that are tracking that data.
But, unlike data exchanges such as Zapier or Segment, Hull also keeps its own copy of the data. That’s the “persistent” bit of the CDP definition. It gives Hull a place to store data from enhancement vendors including Datanyze and Clearbit, from external processes called through Javascript, and from user-defined custom variables and summary properties, such as days since last visit. Those can be used along with other data to create triggers and define segments within Hull. The segments can then be sent to other systems and updated as they change.
In other words, even though the external systems are not directly reading the data stored within Hull, they can still all work with consistent versions of the data.* Think of it as the martech equivalent of Einstein’s “spooky action at a distance,” if that clarifies things for you.
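If an analogy to quantum physics doesn’t help, maybe a toy sketch will. This is my own simplification of the pattern described above – persistent profile copy, derived property, segment, push to connectors – with every name invented; Hull’s actual product obviously does far more.

```python
# Simplified Hull-style pattern: keep a persistent copy of each profile,
# derive a summary property, evaluate a segment, push membership out.
from datetime import date

profiles = {   # the CDP's own persistent copy of synchronized data
    "cust1": {"email": "a@x.com", "last_visit": date(2016, 9, 1)},
    "cust2": {"email": "b@y.com", "last_visit": date(2016, 10, 18)},
}

def days_since_last_visit(profile, today=date(2016, 10, 20)):
    return (today - profile["last_visit"]).days

def lapsed_segment(profiles):
    """Segment definition: no visit in the last 30 days."""
    return {cid for cid, p in profiles.items() if days_since_last_visit(p) > 30}

def push_segment(segment, connectors):
    for send in connectors:          # e.g., an email list or an ad audience
        send(segment)

push_segment(lapsed_segment(profiles),
             [lambda s: print("sync segment:", sorted(s))])   # ['cust1']
```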
To extend its reach even further, Hull.io can also integrate with Zapier and Segment allowing it to exchange data with the hundreds of systems those products support.
Three important things have to happen inside of Hull.io to provide a unified customer view. First, it has to map data from different sources to a common data model – so that things like customer name or product ID are recognized as referring to the same entities even if they come from different places. Hull.io simplifies this as much as possible by limiting its internal data model to two entities, customers and events. Input data, no matter how complicated, is converted to these entities by splitting each record into components that are tagged with their original meaning and relationships. The splitting and tagging are automatic, which is very important for making the system easy to deploy and maintain. Users still need to manually tell the system which elements from different systems should map to the same element in the shared data.
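Before moving to the second step, here’s a minimal sketch of that splitting idea: one invented source record broken into a customer entity and an event entity, each tagged with its source so the original meaning isn’t lost. The field mapping is hypothetical, not Hull’s actual schema.

```python
# Split an arbitrary source record into the two-entity model: a customer
# piece and an event piece, each tagged with where it came from.
def split_record(source: str, record: dict):
    customer = {
        "source": source,
        "external_id": record["requester"]["id"],
        "email": record["requester"]["email"],
    }
    event = {
        "source": source,
        "customer_ref": record["requester"]["id"],
        "type": f"{source}:{record['type']}",
        "properties": {k: v for k, v in record.items() if k != "requester"},
    }
    return customer, event

ticket = {"type": "ticket_created", "subject": "Can't log in", "priority": "high",
          "requester": {"id": "zd-771", "email": "sue@example.com"}}
cust, evt = split_record("zendesk", ticket)
print(cust["email"], evt["type"])   # sue@example.com zendesk:ticket_created
```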
The second important thing is translating stored data into the structure needed by the receiving system. This is the reverse of the data loading process, since complex records must be assembled from the simplified internal model. What’s tricky is that the output format is almost always different from the input format, so the pieces have to be reassembled differently from the way they arrived. While we’re making questionably helpful analogies, think of this as the Jive Lady translating for the sick passenger in the movie Airplane.
The third key thing is that data relating to the same customer needs to be linked. Hull will do “deterministic” matching to stitch together identities where overlapping information is available – such as connecting an account ID to a device when someone uses that device to log into their account. Like many other CDPs, Hull doesn’t attempt “probabilistic” matching, which looks for patterns in behavior or data to associate identifiers that are likely to belong to the same person. It does use IP address to associate visitors with businesses, even if the individual is anonymous.
All told, this adds up to a respectable set of CDP features. But Hull co-founder Romain Dardour says few clients actually come to the company looking for a unified, persistent customer database. Rather, they are trying to create specific processes, such as using Slack to send notifications of support tickets from Zendesk. Hull has built a collection of these processes, which it calls recipes. Customers can use an existing recipe or design their own. Dardour said that once clients deploy a few recipes they usually recognize the broader possibilities of the system and migrate towards thinking of it as a true CDP, even if they still don’t use the term.
This is consistent with what I’ve seen elsewhere. Big enterprises can afford to purchase a unified customer database by itself, but smaller firms often want their CDP to include a specific money-making application. That’s why my original B2B CDPs usually included applications like lead scoring and customer success, while the B2C enterprise CDPs often did not.
The other big divide between Hull and enterprise CDPs is cost. Most enterprise CDPs start somewhere between $100,000 and $250,000 per year and can easily reach seven figures. Hull starts as low as $500 per month, with a current average of about $1,000 and the largest clients topping out around $10,000. Price is based primarily on the number of system connections, with some adjustments for number of contact records, guaranteed response time, data retention period, and special features. Hull has over 1,000 clients, mostly in the U.S. but with world-wide presence. It was founded in 2013.
_________________________________________________________________________________
*You could argue that because the external systems are not reading Hull.io’s data directly, it doesn’t truly qualify as a CDP. I’d say it’s not worth the quibble – although if really massive amounts of data were involved, it might be significant. Remember that Hull.io is dealing with smaller businesses, where replicating all the relevant data is not a huge burden.
Friday, October 14, 2016
Datorama Applies Machine Intelligence to Speed Marketing Analytics
As I mentioned a couple of posts back, I’ve been surveying the borders of Customer Data Platform-land recently, trying to figure out which vendors fit within the category and which do not. Naturally, there are cases where the answer isn’t clear. Datorama is one of them.
At first glance, you’d think Datorama is definitely not a CDP: it positions itself as a “marketing analytics platform” and makes clear that its primary clients are agencies, publishers, and corporate marketers who want to measure advertising performance. But the company also calls itself a “marketing integration engine” that works with “all of your data”, which certainly goes beyond just advertising. Dig a bit deeper and the confusion just grows: the company works mostly with aggregated performance data, but also works with some individual-level data. It doesn’t currently do identity resolution to build unified customer profiles, but is moving in that direction. And it integrates with advertising and Web analytics data on one hand and social listening, marketing automation, and CRM on the other. So while Datorama wasn’t built to be a CDP – because unified customer profiles are the core CDP feature – it may be evolving towards one.
This isn't to say that Datorama lacks focus. The system was introduced in 2012 and now has over 2,000 clients, including brands, agencies, and publishers. It grew by solving a very specific problem: the challenges that advertisers and publishers face in combining information about ad placements and results. Its solution was to automate every step of the marketing measurement process as much as it could, using machine intelligence to identify information within new data sources, map those to a standard data model, present the results in dashboards, and uncover opportunities for improvement. In other words, Datorama gives marketers one system for everything from data ingestion to consolidation to delivery to analytics. This lets them manage a process that would otherwise require many different products and lots of technical support. That approach – putting marketers in control by giving them a system pre-tailored to their needs – is very much the CDP strategy.
Paradoxically, the main result of Datorama’s specialization is flexibility. The system’s developers set a goal of handling any data source, which led to a system that can ingest nearly any database type, API feed, or file format, including JSON and XML; automatically identify the contents of each field; and map the fields to the standard data model. Datorama keeps track of what it learns about common source systems, like Facebook, Adobe Analytics, or AppNexus, making it better at mapping those sources for future implementations. It can also clean, transform, classify, and reformat the inputs to make them more usable, applying advanced features like rules, formulas, and sentiment analysis. At the other end of the process, machine learning builds predictive models to do things like estimate lifetime value and forecast campaign results. The results can be displayed in Datorama’s own interface, read by business intelligence products like Tableau, or exported to other systems like marketing automation.
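To give a flavor of what automatic field identification involves – and this is a deliberately crude stand-in for Datorama’s machine learning, with invented rules and field names – here’s a sketch that guesses which standard field each source column feeds based on its name.

```python
# Toy field mapper: guess the standard-model field for each source column
# from simple name patterns. A real system learns these mappings instead.
import re

STANDARD_FIELDS = {
    "impressions": [r"impr", r"views"],
    "clicks":      [r"click"],
    "spend":       [r"cost", r"spend", r"amount"],
    "campaign":    [r"campaign", r"camp_name"],
    "date":        [r"date", r"^day$"],
}

def map_columns(columns):
    """Return {source_column: standard_field_or_None} using name heuristics."""
    mapping = {}
    for col in columns:
        mapping[col] = next(
            (field for field, patterns in STANDARD_FIELDS.items()
             if any(re.search(p, col.lower()) for p in patterns)), None)
    return mapping

print(map_columns(["Camp_Name", "Impr.", "Clicks", "Media Cost", "Day"]))
# {'Camp_Name': 'campaign', 'Impr.': 'impressions', 'Clicks': 'clicks',
#  'Media Cost': 'spend', 'Day': 'date'}
```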
Datorama’s extensive use of machine learning lets it speed up the marketing analytics process while reducing the cost. But this is still not a push-button solution. The vendor says a typical proof of concept usually takes about one month, and it takes another one to two months more to convert the proof of concept into a production deployment. That’s faster than your father’s data warehouse but not like adding an app to your iPhone. Pricing is also non-trivial: a small company will pay in the five figures for a year’s service and a large company's bill could reach into seven figures. Fees are based on data volume and number of users. Datorama can also provide services to help users get set up or to run the system for them if they prefer.
Tuesday, October 04, 2016
News from Krux, Demandbase, Radius: Customer Data Takes Center Stage
If Dreamforce seems a little less crowded than you expected this week, perhaps it's because I didn’t attend. But I’m still tracking the news from Salesforce and other vendors from my cave in Philadelphia. Three announcements caught my eye, all highlighting the increasing attention being paid to customer data.
Salesforce itself had the biggest news yesterday, with its agreement to purchase Krux, a data management platform that has expanded well beyond the core DMP function of assembling audiences from cookie pools. Krux now has an “intelligent marketing hub” that can also load a company’s own data from CRM, Websites, mobile apps, and offline sources, and unify customer data to build complete cross-channel profiles. Krux also allows third party data owners to sell their data through the Krux platform and offers self-service data science for exploration and predictive models. The purchase makes great strategic sense for Salesforce, providing it with a DMP to match existing components in the Oracle and Adobe marketing clouds. But beyond the standard DMP function of generating advertising audiences, Krux gives Salesforce a solid customer data foundation to support all kinds of marketing management. In particular, it goes beyond the functions in Salesforce ExactTarget, which was previously the designated core marketing database for Salesforce Marketing Cloud. To be clear, there’s no campaign management or journey orchestration within Krux; those functions would be performed by other systems that simply draw on Krux data. Which is exactly as it should be, if marketers are to maintain maximum flexibility in their tools.
Demandbase had its own announcement yesterday: something it calls “DemandGraph,” which is basically a combination of Demandbase’s existing business database with the data gathering and analytical functions of the Spiderbook system that Demandbase bought in May 2016. DemandGraph isn’t exactly a product but rather a resource that Demandbase will use to power other products. It lets Demandbase more easily build detailed profiles of people and companies, including history, interests, and relationships. It can then use the information to predict future purchases and guide marketing and sales messages. There’s also a liberal sprinkling of artificial intelligence throughout DemandGraph, used mostly in Spiderbook’s processing of unstructured Web data but also in some of the predictive functions. If I’m sounding vague here it’s because, frankly, so was Demandbase. But it’s still clear that DemandGraph represents a major improvement in the power and scope of data available to business marketers.
Predictive marketing vendor Radius announced the Radius Customer Exchange last week. This uses the Radius Business Graph database (notice a naming trend here?) to help clients identify shared customers without exposing their entire files to each other. Like Spiderbook, Radius gathers much of its data by scanning the public Web; however, Radius Business Graph also incorporates data provided by Radius clients. The client data provides continuous, additional inputs that Radius says make its data and matching much more accurate than conventional business data sources. Similarly, while there’s nothing new about using third parties to find shared customers, the Radius Customer Exchange enables sharing in near real time, gives precise revocable control over what is shared, and incorporates other information such as marketing touches and predictive models. These are subtle but significant improvements that make data-driven marketing more effective than ever. The announcement also supports a slight shift in Radius’ position from “predictive modeling” (a category that has lost some of its luster in the past year) to “business data provider,” a category that seems especially enticing after Microsoft paid $26.2 billion for LinkedIn.
Do these announcements reflect a change in industry focus from marketing applications to marketing data? I’m probably too data-centric to be an objective judge, but a case could be made. If so, I’d argue it’s a natural development as marketers look beyond the endless supply of sparkly new Martech applications to the underlying foundations needed to support them. In the long run, a solid foundation makes it easier to dance creatively along the surface: so I’d rate a new data-driven attitude as a Good Thing.
Friday, September 30, 2016
Reltio Makes Enterprise Data Usable, and Then Uses It
I’ve spent a lot of time recently talking to Customer Data Platform vendors, or companies that looked like they might be. One that sits right on the border is Reltio, which fits the CDP criteria* but goes beyond customer data to all types of enterprise information. That puts it more in the realm of Master Data Management, except that MDM is highly technical while Reltio is designed to be used by marketers and other business people. You might call it “self-service MDM” but that’s an oxymoron right up there with “do-it-yourself brain surgery”.
Or not. Reltio avoids the traditional complexity of MDM in part by using the Cassandra data store, which is highly scalable and can more easily add new data types and attributes than standard relational databases. Reltio works with a simple data model – or graph schema if you prefer – that captures relationships among basic objects including people, organizations, products, and places. It can work with data from multiple sources, relying on partner vendors such as SnapLogic and MuleSoft for data acquisition and Tamr, Alteryx, and Trifacta for data preparation. It has its own matching algorithms to associate related data from different sources. As for the do-it-yourself bit: well, there’s certainly some technical expertise needed to set things up, but Reltio's services team generally does the hard parts for its clients. The point is that Reltio reduces the work involved – while adding a new source to a conventional data warehouse can easily take weeks or months, Reltio says it can add a new source to an existing installation in one day.
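For a sense of what a "simple data model, or graph schema if you prefer" can look like, here is a minimal sketch: a few typed entities, typed relationships between them, and a deliberately crude matching rule. Reltio's real schema and matching algorithms are proprietary, so every name below is invented.

# Minimal sketch of a graph-style model: typed entities plus typed relationships.
# Reltio's actual schema and matching algorithms are proprietary; this is invented.
entities = {
    "p1": {"type": "person", "name": "Sue Smith", "email": "sue@example.com"},
    "o1": {"type": "organization", "name": "Acme Corp"},
    "pr1": {"type": "product", "name": "Running Shoes"},
}
relationships = [
    ("p1", "works_at", "o1"),
    ("p1", "purchased", "pr1"),
]

def naive_match(a, b):
    """Crude stand-in for record matching: same email, else same normalized name."""
    if a.get("email") and a.get("email") == b.get("email"):
        return True
    return a["name"].strip().lower() == b["name"].strip().lower()

incoming = {"type": "person", "name": "SUE SMITH", "email": None}
print(any(naive_match(incoming, e) for e in entities.values() if e["type"] == "person"))

Production matching is far fancier (fuzzy names, addresses, survivorship rules), but the shape of the problem is the same.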
The result is a customer profile that contains pretty much any data the company can acquire. This is where the real fun begins, because that profile is now available for analysis and applications. These can also be done in Reltio itself, using built-in machine learning and data presentation tools to provide deep views into customers and accounts, including recommendations for products and messages. A simple app might take one or two months to build; a complicated app might take three or four months. The data is also available to external systems via real-time API calls.
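As an example of the "real-time API calls" part, an external system might pull a profile with a simple HTTP request. The endpoint, parameters, and response fields below are hypothetical placeholders, not Reltio's documented API.

# Hypothetical example of an external system pulling a unified profile over HTTP.
# The URL, auth scheme, and response fields are invented; Reltio's real API differs.
import json
import urllib.request

def get_profile(base_url, entity_id, token):
    req = urllib.request.Request(
        f"{base_url}/entities/{entity_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# profile = get_profile("https://tenant.example.com/api", "p1", "API_TOKEN")
# print(profile["attributes"])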
Reltio is a cloud service, meaning the system doesn’t run on the client’s own computers. Pricing depends on the number of users and profiles managed but not the number of sources or data volume. The company was founded in 2011 and released its product several years later. Its clients are primarily large enterprises in retail, media, and life sciences.
______________________________________________________________________
* marketer-controlled; multi-source unified persistent data; accessible to external systems
Monday, September 19, 2016
History of Marketing Technology and What's Special about Journey Orchestration
I delivered my presentation on the history of marketing technology last week at the Optimove CONNECT conference in Tel Aviv. Sadly, the audience didn’t seem to share my fascination with arcana (did you know that the Chinese invented paper in 100 CE? That Return on Investment analysis originated at DuPont in 1912?). So, chastened a bit, I’ll share with you a much-condensed version of my timeline, leaving out juicy details like brothel advertising at Pompeii.
The timeline* traces three categories: marketing channels; tools used by marketers to manage those channels; and data available to marketers. The yellow areas represent the volume of technology available during each period. Again skipping over my beloved details, there are two main points:
- although the number of marketing channels increased dramatically during the industrial age (adding mass print, direct mail, radio, television, and telemarketing), there was almost no growth in marketing technology or data until computers were applied to list management in the 1970’s. The real explosions in martech and data happen after the Internet appears in the 1990’s.
- the core martech technology, campaign management, begins in the 1980’s: that is, it predates the Internet. In fact, campaign management was originally designed to manage direct mail lists (and – arcana alert! – itself mimicked practices developed for mechanical list technologies such as punch cards and metal address plates). Although marketers have long talked about being customer- rather than campaign-centric, it’s not until the current crop of Journey Orchestration Engines (JOEs) that we see a thorough replacement of campaign-based methods.
It’s not surprising the transition took so long. As I described in my earlier post on the adoption of electric power by factories (more arcana!), the shift to new technology happens in stages as individual components of a process are changed, which then opens a path to changing other components, until finally all the old components are gone and new components are deployed in a configuration optimized for the new capabilities. In the transition from campaign management to journey orchestration, marketers had to develop tools to track individuals over time, to personalize messages to those individuals, to identify and optimize individual journeys, to act on complete data in real time, and to incorporate masses of unstructured data. Each of those transitions involved a technology change: from lists to databases, from static messages to dynamic content, from segment-level descriptive analytics to individual-level predictions, from batch updates to real time processes, and from relational databases to “big data” stores.
It’s really difficult to retrofit old systems with new technologies, which is one reason vendors like Oracle and IBM keep buying new companies to supplement current products. It’s also why the newest systems tend to be the most advanced.** Thus, the Journey Orchestration Engines I’ve written about previously (Thunderhead ONE, Pointillist, Usermind, Hive9) all use NoSQL data stores, build detailed individual-level customer histories, and track individuals as they move from state to state within a journey flow.
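To see why "state to state within a journey flow" is such a different mental model from a campaign sequence, here is a stripped-down sketch of journey state tracking. The states, events, and transition rules are invented; no vendor's actual engine looks this simple.

# Stripped-down sketch of state-to-state journey tracking (generic, not any vendor's engine).
TRANSITIONS = {
    ("prospect", "requested_demo"): "evaluating",
    ("evaluating", "signed_contract"): "customer",
    ("customer", "no_activity_90d"): "at_risk",
}

def advance(profile, event):
    """Move a customer to a new state when an event triggers a transition; keep history."""
    new_state = TRANSITIONS.get((profile["state"], event))
    if new_state:
        profile["history"].append((profile["state"], event, new_state))
        profile["state"] = new_state
    return profile

sue = {"id": "p1", "state": "prospect", "history": []}
advance(sue, "requested_demo")
print(sue["state"], sue["history"])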
During my Tel Aviv visit last week, I also checked in with Pontis (just purchased by Amdocs), who showed me their own new tool, which does an exceptionally fine job of ingesting all kinds of data, building a unified customer history, and coordinating treatments across all channels, all in real time. In true JOE fashion, the system selects the best treatment in each situation rather than pushing customers down predefined campaign sequences. Pontis also promised their February release would use machine learning to pick optimal messages and channels during each treatment. Separately, Optimove announced its own “Optibot” automation scheme, which also finds the best treatments for individuals as they move from state to state. So you can add Optimove to your cup of JOEs (sorry) as well.
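Neither Pontis nor Optimove has described its algorithms in detail, but "pick the best treatment in each situation" is often implemented with something bandit-like. Here is a deliberately simple epsilon-greedy sketch; the treatments and conversion counts are made up, and this is not either vendor's actual method.

# Deliberately simple epsilon-greedy treatment selection (illustrative only;
# not Pontis's or Optimove's actual algorithm). Treatments and counts are made up.
import random

treatment_stats = {              # (conversions, sends)
    "discount_email": (30, 1000),
    "loyalty_push": (55, 1000),
    "no_contact": (20, 1000),
}

def choose_treatment(epsilon=0.1):
    """Usually exploit the best-performing treatment; occasionally explore another."""
    if random.random() < epsilon:
        return random.choice(list(treatment_stats))
    return max(treatment_stats, key=lambda t: treatment_stats[t][0] / treatment_stats[t][1])

print(choose_treatment())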
I’m reluctant to proclaim JOEs as the final stage in customer management evolution only because it’s too soon to know if more change is on the way. As Pontis and Optimove both illustrate, the next step may be using automation to select customer treatments and ultimately to generate the framework that organizes those treatments. When that happens, we will have erased the last vestiges of the list- and campaign-based approaches that date back to the mail order pioneers of the 19th century and to the ancient Sumerians (first customer list, c. 3,000 BCE) before that.
_________________________________________________________________________________
*Dates represent commercialization, not the first appearance of the underlying technology. For example, we all know that Gutenberg’s press with moveable type was introduced around 1450, but newspapers with advertising didn’t show up until after 1600.
** This isn’t quite as tautological as it sounds. In some industries, deep-pocketed old vendors with big research budgets are the technical leaders.