Sunday, October 07, 2018

How to Build a CDP RFP Generator

Recent discussions with Customer Data Platform buyers and vendors have repeatedly circled around a small set of questions:
  • what are the use cases for CDP? (This really means, when should you use a CDP and when should you use something else?)
  • what are the capabilities of a CDP? (This really means, what are the unique features I’ll find only in a CDP? It might also mean, what features do all CDPs share and which are found in some but not others?)
  • which CDPs have which capabilities? (This really means, which CDPs match my requirements?)
  • can someone create a standardized CDP Request for Proposal? (This comes from vendors who are now receiving many poorly written CDP RFPs.)
These questions are intertwined: use cases determine the capabilities users need; requirements are the heart of an RFP, and finding which vendors have which capabilities is the goal of vendor selection. These connections suggest the questions could all be answered as part of one (complicated) solution. This might involve:

1. defining a set of common CDP use cases
2. identifying the CDP capabilities required to support each use case
3. identifying the capabilities available in specific CDPs
4. having buyers specify which use cases they want to support
5. auto-generating an RFP that lists the requirements for the buyer’s use cases
6. creating a list of vendors whose capabilities match those requirements

What’s interesting is that steps 1-3 describe information that could be assembled once and used by all buyers, while steps 5 and 6 are purely mechanical. So only step 4 (picking use cases) requires direct buyer input.  This means the whole process could be made quite easy.

(Actually, there’s one more bit of buyer input, which is to specify which capabilities they already have available in existing systems. Those capabilities can then be excluded from the RFP requirements. The capabilities list could also be extended to non-CDP capabilities, since most use cases will involve other systems that could have their own gaps.  These nuances don’t change the basic process.)
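The six-step process above can be sketched in a few lines of code. This is a minimal illustration of the idea, not the actual spreadsheet: all capability, use-case, and vendor names here are hypothetical.

```python
# Minimal sketch of steps 1-6. Capability, use-case, and vendor names
# are hypothetical; a real version would use the full static tables.

# Static data (steps 1-3): which capabilities each use case needs,
# and which capabilities each vendor offers.
USE_CASE_CAPABILITIES = {
    "Real-time web interactions": {"web accepts input", "real-time segmentation"},
    "Single customer view":       {"identity resolution", "persistent profiles"},
}
VENDOR_CAPABILITIES = {
    "Vendor A": {"identity resolution", "persistent profiles"},
    "Vendor B": {"real-time segmentation", "persistent profiles"},
}

def generate_rfp(selected_use_cases, existing_capabilities=frozenset()):
    """Steps 4-6: from the buyer's chosen use cases, derive the required
    capabilities, drop ones the buyer already has, and rank vendors."""
    required = set()
    for uc in selected_use_cases:                        # step 4: buyer input
        required |= USE_CASE_CAPABILITIES[uc]
    requirements = required - set(existing_capabilities)  # step 5: RFP list
    vendor_fit = {                                        # step 6: matching
        vendor: len(requirements & caps)
        for vendor, caps in VENDOR_CAPABILITIES.items()
    }
    return requirements, vendor_fit

reqs, fit = generate_rfp(
    ["Single customer view"], existing_capabilities={"identity resolution"}
)
print(sorted(reqs))   # remaining requirements for the RFP
print(fit)            # how many requirements each vendor fills
```

Only the call to `generate_rfp` involves buyer input; everything else is the shared static data and mechanical processing described above.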

As a sanity check, I’ve built a small Proof of Concept for this approach using an Excel spreadsheet.  I'm happy to say it works quite nicely.  I'll share a simplified version here to illustrate how it works.  In particular, I'll show just a few capabilities, use cases, and (anonymous) vendors.

We’ll start with the static data.

The columns are:
  • Capability: a system capability.
  • Description: a description of the capability. This can both help users understand what it is and be a requirement in the resulting RFP. Or, we could create separate RFP language for each capability. This could go into more detail about the required features.
  • CDP Feature: indicates whether the capability would be found in at least some CDPs. The CDP RFP can ignore features that aren't part of the CDP, but it's still important to identify them because they could create a gap that makes the use case impossible.  For example, consider the first row in the sample table, whether the Web system can accept external input.  This isn't a CDP feature but it's needed to deliver the use case for Real time web interactions.
  • Use Cases: shows which capabilities are needed for which use case. For items that relate to a specific channel, each channel would be a separate use case.  In the sample table, Single Source Access is specifically related to the Point of Sale channel while Real Time Interactions are specifically related to Web.
  • Vendor Capabilities: these indicate whether a particular vendor provides a particular capability.

The second table looks at the items that depend on user input. The only direct user inputs are to choose which use cases apply (not shown here) and to indicate which capabilities already exist in current systems.  All other items are derived from those inputs and the static data.
The columns are:

  • Nbr Use Cases Needing: this shows how many use cases require this capability. It’s the sum of the capability values for the selected use cases.
  • Already Have: this is the user’s input, showing which of the required capabilities are already available. In the sample table, the last row (site tag) is an existing capability.  Since it exists, you can leave it out of the RFP.
  • Nbr Gaps: the number of use cases that need the capability, excluding capabilities that are already available. These are gaps. Using the number of cases, rather than a simple 1 or 0, provides some sense of how important it is to fill each gap.
  • Nbr CDP Gaps: the number of gap use cases that might be enabled by a CDP. In the example's first row, Web – accept input (the ability of a Web site to accept external input) isn’t a CDP attribute, so this value is set to zero.
  • Gaps Filled by Vendor: the number of CDP Gaps filled by each vendor, based on the vendor capabilities. A total at the bottom of each column shows the sum for all capabilities for each vendor.  This gives a rough indicator of which vendors are the best fit for a particular user.
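The derived columns described above are simple arithmetic over the static table and the buyer's inputs. Here's a minimal sketch with hypothetical rows that mirror the examples in the text (Web – accept input isn't a CDP feature, so its CDP Gaps value is zero; the site tag already exists, so it generates no gaps):

```python
# Sketch of the derived columns in the second table. Rows and values
# are hypothetical, mirroring the capabilities described in the text.
capabilities = [
    # name,                 is_cdp_feature, nbr_use_cases_needing, already_have
    ("Web - accept input",  False,          2,                     False),
    ("Identity resolution", True,           3,                     False),
    ("Site tag",            True,           1,                     True),
]
# which capabilities a (hypothetical) vendor provides
vendor_fills = {"Vendor A": {"Identity resolution"}}

rows = []
for name, is_cdp, nbr_needing, already_have in capabilities:
    nbr_gaps = 0 if already_have else nbr_needing     # Nbr Gaps
    nbr_cdp_gaps = nbr_gaps if is_cdp else 0          # Nbr CDP Gaps
    filled = nbr_cdp_gaps if name in vendor_fills["Vendor A"] else 0
    rows.append((name, nbr_gaps, nbr_cdp_gaps, filled))
    print(name, nbr_gaps, nbr_cdp_gaps, filled)
```

Summing the last column per vendor gives the rough fit indicator mentioned above.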

The main outputs of this process would be:

  • List of gaps, prioritized by how many use cases each gap is blocking and divided into gaps that a CDP could address and gaps that need to be addressed by other systems.
  • List of CDP requirements, which can easily be transformed into an RFP. A complete RFP would have additional questions such as vendor background and pricing.  But these are pretty much the same for all companies so they can be part of a standard template. The only other input needed from the buyer is information about her own company and goals. And even some goal information is implicit in the use cases selected.
  • List of CDP vendors to consider, including which vendors fill which gaps and which have the best over-all fit (i.e., fill the most gaps). This depends on having complete and accurate vendor information and will be a sensitive topic with vendors who hate to be excluded from consideration before they can talk to a potential buyer.  So it's something we might not do right away.  But it’s good to know it’s an option.

Beyond the Basics

We could further refine the methodology by assigning weights to different use cases, to capabilities within each use case, to existing capabilities, and to vendor capabilities. This would give a much more nuanced representation of real-world complexity. Most of this could be done within the existing framework by assigning fractions rather than ones and zeros to the tables shown above. I’m not sure how much added value users would get from the additional work, in particular given how uncertain many of the answers would be.
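To make the weighting idea concrete, here's a hypothetical sketch in which 1/0 flags become fractions; every weight shown is illustrative, not a recommendation:

```python
# Weighted variant: fractional importance instead of 1/0 flags.
# All names and weights here are illustrative, not recommendations.
use_case_weights = {"Real-time web": 1.0, "Attribution": 0.5}
capability_need = {   # how strongly each use case needs each capability
    "Real-time web": {"real-time segmentation": 1.0, "persistent profiles": 0.3},
    "Attribution":   {"persistent profiles": 1.0},
}

# Gap score per capability = sum over use cases of (use case weight x need).
gap_score = {}
for uc, uc_weight in use_case_weights.items():
    for cap, need in capability_need[uc].items():
        gap_score[cap] = gap_score.get(cap, 0.0) + uc_weight * need

print(gap_score)
# persistent profiles: 1.0*0.3 + 0.5*1.0 = 0.8; real-time segmentation: 1.0
```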

We could also write some rules to make general observations and recommendations based on the inputs, such as what to prioritize. We could even add a few more relevant questions to better assess the resources available to each user and further refine the recommendations. That would also be pretty easy and we could easily expand the outputs over time.

What’s Next?

But first things first. While I think I’ve solved the problem conceptually, the real work is just beginning. We need to refine the capability categories, create proper RFP language for each category, define an adequate body of use cases, map each use case to the required capabilities, create the RFP template, and research individual vendor capabilities. I’ll probably hold off on the last item because of the work involved and the potential impact of any errors.

Of course, we can refine all these items over time.  The biggest initial challenge is transforming my Excel sheet into a functioning Web application. Any decent survey tool could gather the required input but I’m not aware of one that can do the subsequent processing and results presentation. A more suitable class of system would be the interactive content products used to generate quotes and self-assessments. There are lots of these and it will be a big project to sort through them. We’ll also be constrained by cost: anything over $100 a month will be a stretch. If anybody reading this has suggestions, please send me an email.

In the meantime, I’ll continue working with CDP Institute Sponsors and others to refine the categories, use cases, and other components. Again, anyone who wants to help out is welcome to participate.

This is a big project.  But it directly addresses several of the key challenges facing CDP users today. I look forward to moving ahead.

Tuesday, September 25, 2018

Salesforce Customer 360 Solution to Share Data Without a Shared Database

Salesforce has sipped the Kool-Aid: it led off the Dreamforce conference today with news of Customer 360, which aims to “help companies move beyond an app- or department-specific view of each customer by making it easier to create a single, holistic customer profile to inform every interaction”.

But they didn’t drink the whole glass. Customer 360 isn't assembling a persistent, unified customer database as described in the Customer Data Platform Institute's CDP definition.  Instead, they are building connections to data that remains in its original systems – and proud of it. As Senior Vice President of Product Management Patrick Stokes says in a supporting blog post, “People talk about a ‘single’ view of the customer, which implies all of this data is stored somewhere centrally, but that's not our philosophy. We believe strongly that a graph of data about your customer combined with a standard way for each of your applications to access that data, without dumping all the data into the same repository, is the right technical approach.”

Salesforce gets lots of points for clarity.  Other things they make clear include:
  • Customer 360 is in a closed pilot release today with general availability in 2019. (Okay, saying when in 2019 might be helpful, but we’ll take it.)
  • Customer 360 currently unifies Salesforce’s B2C products, including Marketing, Customer Service, and Commerce. (Elsewhere, Salesforce does make the apparently conflicting assertions that “For customers of Salesforce B2B products, all information is in one place, in a single data model for marketing, sales, B2B commerce and service” and “Many Salesforce implementations on the B2B side, especially those with multi-org deployments, could be improved with Customer 360.” Still, the immediate point is clear.)
  • Customer 360 will include an identity resolution layer to apply a common customer ID to data in the different Salesforce systems.  (We need details but presumably those will be forthcoming.)
  • Customer 360 is only about combining data within Salesforce products, but can be extended to include API connections with other systems through Salesforce Mulesoft.  (Again, we need details.)
  • Customer 360 is designed to let Salesforce admins set up connections between the related systems: it's not an IT tool (no coding is needed) and it's not for end-users (only some prebuilt packages with particular data mappings are available).
So we have a pretty good idea of what Salesforce is doing. The question is, are they doing the right thing?

I say they’re not. The premise of the Customer Data Platform approach is that customer data needs to be extracted into a shared central database. The fundamental reason is that having the data in one place makes it easier to access because you’re not querying multiple source systems and potentially doing additional real-time processing when data is needed. A central database can do all this work in advance, enabling faster and more consistent response, and place the data in structures that are most appropriate for particular business needs.  Indeed, a CDP can maintain the same data in different structures to support different purposes. It also can retain data that might be lost in operational systems, which frequently overwrite information such as customer status or location. This information can be important to understand trends, behavior patterns, or past context needed to build effective predictions. On a simpler level, a persistent database can accept batch file inputs from source systems that don’t allow an API connection or – and this is very common – from systems whose managers won’t allow direct API connections for fear of performance issues.

Obviously these arguments are familiar to the people at Salesforce, so you have to ask why they chose to ignore them – or, perhaps better stated, what arguments they felt outweighed them. I have no inside information, but I suspect the fundamental reason Salesforce has chosen not to support a separate CDP-style repository is that the data movement required would be very expensive given the designs of their current products, creating performance issues and costs their clients wouldn't accept.  It’s also worth noting that the examples Salesforce gives for using customer data mostly relate to real-time interactions such as phone agents looking up a customer record.  In those situations, access to current data is essential: providing it through real-time replication would be extremely costly while reading it directly from source systems is quite simple.  So if Salesforce feels real-time interactions are the primary use case for central customer data, it makes sense to take their approach and sacrifice the historical perspective and improved analytics that a separate database can provide.

It’s interesting to contrast Salesforce’s approach with yesterday’s Open Data Initiative announcement from Adobe, Microsoft and SAP.  That group has taken exactly the opposite tack, developing a plan to extract data from source systems and load it into an Azure database. This is a relatively new approach for Adobe, which until recently argued – as Salesforce still does – that creating a common ID and accessing data in place was enough. That they tried and abandoned this method suggests that they found it won’t meet customer needs.  It could even be cited as evidence that Salesforce will eventually reach the same conclusion. But it’s also worth noting that Adobe’s announcement focused primarily on analytical uses of the unified customer data and their strongest marketing product is Web analytics. Conversely, Salesforce’s heritage is customer interactions in CRM and email. So it may be that each vendor has chosen the approach which best supports its core products.

(An intriguing alternative explanation, offered by TechCrunch, is that Adobe, Microsoft and SAP have created a repository specifically to make it easier for clients to extract their data from Salesforce. I’m not sure I buy this but the same logic would explain why Salesforce has chosen an approach that keeps the core data locked safely within existing Salesforce systems.)

I’ll state the obvious by pointing out that companies need both analytics and interactions. We already know that many CDPs can access data in place, most commonly applied to information such as location or weather which changes constantly and is only relevant when an interaction occurs. So a hybrid approach is already common (though not universal) in the CDP world. Salesforce does say that “Customer 360 creates and stores a customer profile”, so some persistence is already built into the product. We don’t know how much data is kept in that profile and it might only be the identifiers needed for cross-system identity resolution. (That’s what Adobe stored persistently before it changed its approach.)  You could view this as the seed of a hybrid solution already planted within Customer 360.  But while it can probably be extended to some degree, it’s not the equivalent of a CDP that is designed to store most data centrally.

My guess is that Salesforce will eventually decide, as Adobe has already, that a large central repository is necessary. Customer 360 builds connections that are needed to support such a repository, so it can be viewed as a step in that direction, whether or not that's the intent.  Since a complete solution needs both central storage and direct access, we can view the challenge as finding the right balance between the two storage models, not picking one or the other exclusively. Finding a balance isn't as much fun as having a religious war over which is absolutely correct but it's ultimately the best solution for marketers and other users.

And what does all this mean for the independent CDP market? Like Adobe yesterday, Salesforce is describing a product in its early stages – although the Salesforce approach is technically less challenging and closer to delivery. It will appeal primarily to companies that use the three Salesforce B2C systems, which I think is a relatively small subset of the business world. Exactly how non-Salesforce systems are integrated through Mulesoft isn’t yet clear: in particular, I wonder how much identity resolution will be possible.

But I still feel the access-in-place approach solves only a part of the problem addressed by CDPs and not the most important part at that. We know from research and observation that the most common CDP use cases are analytics, not interactions: although coordinated omnichannel customer experience is everyone's ultimate goal, initial CDP projects usually focus on understanding customers and analyzing their behaviors over time. In particular, artificial intelligence relies heavily on comprehensive customer data sets that need to be assembled in a persistent data store outside of the source systems. Given the central role that AI is expected to play in the future, it’s hard to imagine marketers enthusiastically embracing a Salesforce solution that they recognize won't assemble AI training sets.  They’re more likely to invest in one solution that meets both analytical (including AI) and interaction needs. For the moment, that puts them firmly back into CDP territory (including Datorama, which Salesforce bought in August).

The big question is how long this moment lasts. Salesforce and Adobe/Microsoft/SAP will all get lots of feedback from customers once they deploy their solutions. We can expect them to be fast learners and pragmatic enough to extend their architectures in whatever ways are needed to meet customer requirements. The threat of those vendors deploying truly competitive products has always hung over the CDP industry and is now more menacing than ever.  There may even be some damage before those vendors deploy effective solutions, if they scare off investors and confuse buyers or just cause buyers to defer their decisions.  CDP vendors and industry analysts, who are already struggling to help buyers understand the nuances of CDP features, will have an even harder job to explain the strengths and weaknesses of these new alternatives. But the biggest job belongs to the buyers themselves: they're the ones who will most suffer if they pick products that don't truly meet their needs.

Monday, September 24, 2018

Adobe, Microsoft and SAP Announce Open Data Initiative: It's CDP Turf But No Immediate Threat

One of the more jarring aspects in Adobe’s briefing last week about its Marketo acquisition* was several statements that suggested Marketo and Adobe’s other products were going to access shared customer data. This would be the Experience Cloud Profile announced in March and based on an open source data model developed jointly with Microsoft and stored on Microsoft Azure.**  When I tried to reconcile Adobe’s statements with reality, the best I could come up with was they were saying that Adobe systems and Marketo would push their data into the Experience Cloud Profiles and then synchronize whatever bits they found useful with each application’s data store. That’s not the same as replacing the separate data stores with direct access to the shared Azure files but it is sharing of a sort. Whether even that level of integration is available today is unclear but if we required every software vendor to only describe capabilities that are actually in place, the silence would be deafening.

The reason the shared Microsoft project was on Adobe managers’ minds became clear today when Adobe, Microsoft and SAP announced an “Open Data Initiative” that seemed pretty much the same news as before – open source data models (for customers and other objects) feeding a system hosted on Azure. The only thing that really seemed new was SAP’s involvement. And, as became clear during analyst questions after the announcement at Microsoft’s Ignite conference, this is all in very early stages of planning.

I’ll admit to some pleasure that these firms have finally admitted the need for unified customer data, a topic close to my heart. Their approach – creating a persistent, standardized repository – is very much the one I’ve been advocating under the Customer Data Platform label. I’ll also admit to some initial fear that a solution from these vendors will reduce the need for stand-alone CDP systems. After all, stand-alone CDP vendors exist because enterprise software companies including Microsoft, Adobe and SAP have left a major need unfilled.

But in reviewing the published materials and listening to the vendors, it’s clear that their project is in very early stages. What they said on the analyst call is that engineering teams have just started to work on reconciling their separate data models – which is the heart of the matter. They didn’t put a time frame on the task but I suspect we’re talking more than a year to get anything even remotely complete. Nor, although the vendors indicated this is a high strategic priority, would I be surprised if they eventually fail to produce something workable.  That could mean they produce something, but it’s so complicated and exception-riddled that it doesn’t meet the fundamental goal of creating truly standardized data.

Why I think this could happen is that enterprise-level customer data is very complicated.  Each of these vendors has multiple systems with data models that are highly tuned to specific purposes and are still typically customized or supplemented with custom objects during implementation. It’s easy to decide there’s an entity called “customer” but hard to agree on one definition that will apply across all channels and back-office processes. In practice, different systems have different definitions that suit their particular needs.

Reconciling these is the main challenge in any data integration project.  Within a single company, the solution involves detailed, technical discussions among the managers of different systems. Trying to find a general solution that applies across hundreds of enterprises may well be impossible. In practice, you’re likely to end up with data models that support different definitions in different circumstances with some mechanism to specify which definition is being used in each situation. That may be so confusing that it defeats the purpose of having shared data, which is for different people to easily make use of it.

Note that CDPs are deployed at the company level, so they don’t need to solve the multi-company problem.*** This is one reason I suspect the Adobe/Microsoft/SAP project doesn’t pose much of a threat to the current CDP vendors, at least so long as buyers actually look at the details rather than just assuming the big companies have solved the problem because they’ve announced they're working on it.

The other interesting aspect of the joint announcement was its IT- rather than marketing-centric focus. All three of the supporting quotes in the press release came from CIOs, which tells you who the vendors see as partners. Nothing wrong with that: one of the trends I see in the CDP market is a separation between CDPs that focus primarily on data management (and enterprise-wide use cases and IT departments as primary users) and those that incorporate marketing applications (and marketing use cases and marketers as users). As you may recall, we recently changed the CDP Institute definition of CDP from “marketer-controlled” to “packaged software” to reflect the use of customer data across the enterprise. But most growth in the CDP industry is coming from the marketing-oriented systems. The Open Data Initiative may eventually make life harder for the enterprise-oriented CDPs, although I’m sure they would argue it will help by bringing attention to a problem that it doesn’t really solve, opening the way to sales of their products.  It’s even less likely to impact sales of the marketing-oriented CDPs, which are bought by marketing departments who want tightly integrated marketing applications.

Another indication of the mindset underlying the Open Data Initiative is this more detailed discussion of their approach, from Adobe’s VP of Platform Engineering. Here the discussion is mostly about making the data available for analysis. The exact quote “to give data scientists the speed and flexibility they need to deliver personalized experiences” will annoy marketers everywhere, who know that data scientists are not responsible for experience design, let alone delivery. Although the same post does mention supporting real-time customer experiences, it’s pretty clear from context that the core data repository is a data lake to be used for analysis, not a database to be accessed directly during real-time interactions. Again, nothing wrong with that and not all CDPs are designed for real-time interactions, either. But many are and the capability is essential for many marketing use cases.

In sum: today’s announcement is important as a sign that enterprise software vendors are (finally) recognizing that their clients need unified customer data. But it’s early days for the initiative, which may not deliver on its promises and may not promise what marketers actually want or need. It will no doubt add more confusion to an already confused customer data management landscape. But smart marketers and IT departments will emerge from the confusion with a sound understanding of their requirements and systems that meet them. So it's clearly a step in the right direction.

*I didn't bother to comment on the Marketo acquisition in detail because, let’s face it, the world didn’t need one more analysis. But now that I’ve had a few days to reflect, I really think it was a bad idea. Not because Marketo is a bad product or it doesn’t fill a big gap in the Adobe product line (B2B marketing automation).  It's because filling that gap won’t do Adobe much good. Their creative and Web analysis products already gave them a presence in every marketing department worth considering, so Marketo won’t open many new doors. And without a CRM product to sell against Salesforce, Adobe still won’t be able to position itself as a Salesforce replacement. So all they bought for $4.75 billion was the privilege of selling a marginally profitable product to their existing customers. Still worse, that product is in a highly competitive space where growth has slowed and the old marketing automation approach (complex, segment-based multi-step campaign flows) may soon be obsolete. If Adobe thinks they’ll use Marketo to penetrate small and mid-size accounts, they are ignoring how price-sensitive, quality-insensitive, support-intensive, and change-resistant those buyers are. And if they think they’ll sell a lot of add-on products to Marketo customers, I’d love to know what those would be.

** I wish Microsoft would just buy Adobe already. They’re like a couple that’s been together for years and had kids but refuses to get married.

*** Being packaged software, CDPs let users implement solutions via configuration rather than custom development. This is why they’re more efficient than custom-built data warehouses or data lakes for company-level projects.

Thursday, September 06, 2018

Customer Data Platforms vs Master Data Management: How They Differ

My wanderings through the Customer Data Platform landscape have increasingly led towards the adjacent realm of Master Data Management (MDM). Many people are starting to ask whether they’re really the same thing or could at least be used for some of the same purposes.

Master Data Management can be loosely defined as maintaining and distributing consistent information about core business entities such as people, products, and locations. (Start here if you’d like to explore more formal definitions.) Since customers are one of the most important core entities, it clearly overlaps with CDP.

Specifically, MDM and CDP both require identity resolution (linking all identifiers that apply to a particular individual), which enables CDPs to bring together customer data into a comprehensive unified profile. In fact some CDPs rely on MDM products to perform this function.
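For readers curious about the mechanics: one common way to implement identity resolution is to treat each matched identifier pair as an edge and compute connected components, for example with a union-find structure. This sketch uses hypothetical identifiers and is not how any particular CDP or MDM product works:

```python
# Identity resolution as connected components: matched identifier pairs
# (email<->cookie, cookie<->CRM id, etc.) are edges; each component is
# one unified profile. Identifiers and matches are hypothetical.
from collections import defaultdict

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving keeps trees shallow
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

matches = [
    ("email:jane@example.com", "cookie:abc123"),
    ("cookie:abc123", "crm:0042"),
    ("email:joe@example.com", "cookie:xyz789"),
]
for a, b in matches:
    union(a, b)

# All identifiers with the same root belong to one unified profile.
profiles = defaultdict(set)
for ident in parent:
    profiles[find(ident)].add(ident)
print([sorted(p) for p in profiles.values()])
```

Note how the email, cookie, and CRM identifiers end up linked transitively even though no single match connected all three directly.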

MDM and (some) CDP systems also create a “golden record” containing the business’s best guess at customer attributes such as name and address. That’s the “master” part of MDM.  It often requires choosing between conflicting information captured by different systems or by the same system at different times. CDP and MDM both share that golden record with other systems to ensure consistency.
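As an illustration of how a golden record might resolve those conflicts, here is one simple survivorship rule: keep the most recently updated non-empty value. The rule and the sample records are hypothetical; real MDM and CDP products offer many alternative strategies (source priority, completeness scoring, voting, and so on).

```python
# Sketch of a "golden record" survivorship rule: when systems disagree,
# keep the most recently updated non-empty value. The records below
# are illustrative, not from any real system.
from datetime import date

records = [  # the same customer, as seen by three hypothetical systems
    {"source": "CRM",   "updated": date(2018, 6, 1),  "address": "10 Main St"},
    {"source": "Email", "updated": date(2018, 9, 1),  "address": ""},
    {"source": "POS",   "updated": date(2018, 8, 15), "address": "22 Oak Ave"},
]

def golden(records, field):
    """Return the most recently updated non-empty value for the field."""
    candidates = [r for r in records if r.get(field)]
    return max(candidates, key=lambda r: r["updated"])[field]

print(golden(records, "address"))   # "22 Oak Ave": newest non-empty value
```

The Email record is newest but has no address, so the rule falls back to the next most recent source; that kind of tie-breaking is exactly what survivorship logic exists to standardize.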

So how do CDP and MDM differ? The obvious answer is that CDP manages a lot more than just master data: it captures all the details of transactions and behaviors that are not themselves customer attributes. But many MDM products are components of larger data integration suites from IBM, SAP, Oracle, SAS, Informatica, Talend and others. These also manage more than the identifying attributes of core data objects. You could argue that this is a bit of a bait-and-switch: the CDP-like features in these suites are not part of their MDM products. But it does mean that the vendors may be able to meet CDP data requirements, even if you need more than their MDM module to do it.

Another likely differentiator is that MDM systems run on SQL databases and work with structured data. This is the best way to manage standardized entity attributes.  By contrast, CDPs work with structured, semi-structured and unstructured data which requires a NoSQL file system like Hadoop. But, again, the larger integration suites often support semi-structured and unstructured data and NoSQL databases.  So the boundary remains blurry.

On the practical level, MDMs are primarily tools that IT departments buy to improve data quality and consistency.  Business user interfaces are typically limited to specialized data governance and workflow functions. CDPs are designed to be managed by business users although deploying them does take some technical work. Marketing departments are the main CDP buyers and users while MDM is clearly owned by IT.

One CDP vendor recently told me the main distinction they saw was that MDM takes a very rigid approach to identity data, creating a master ID that all connected systems are required to use as the primary customer ID. He contrasted this with the CDP approach that lets each application work with its own IDs and only unifies the data within the CDP itself.  He also argued that some CDPs (including his, of course) let users apply different matching rules for different purposes, applying more stringent matches in some cases and looser matches in others. I’m not sure that all MDM systems are really this rigid.  But it’s something to explore if you’re assessing how an MDM might work in your environment.

Going back to practical differences, most CDPs have standard connectors for common marketing data sources, analysis tools, and execution systems. Those connectors are tuned to ingest complete data streams, not the handful of entity attributes needed for master data management.  There are certainly exceptions to this among CDPs: indeed, CDPs that focus on analytics and personalization are frequently used in combination with other CDPs that specialize in data collection. MDM vendors are less marketing-centric than CDPs so you’ll find fewer marketing-specific connectors and data models. Similarly, most MDMs are not designed to store, expose, reformat, or deliver complete data sets. But, again, MDMs are often part of larger integration suites that do offer these capabilities.

So, where does this leave a weary explorer of the CDP jungle? On one hand, MDM in itself is very different from CDP: it provides identity resolution and shares standard (“golden”) customer attributes, but doesn’t ingest, store or expose full details for all data types.  On the other hand, many MDM products are part of larger suites that do have these capabilities.

The real differentiator is focus: CDPs are built exclusively for customer data, are packaged software built for business users (mostly marketers), and have standard connectors for customer-related systems. MDM is a general-purpose tool designed as a component in systems built and run by IT departments.

Those differences won’t necessarily show up on paper, where both types of systems will check the boxes on most capabilities lists. But they’ll be clear enough as you work through the details of use cases and deployment plans.

As any good explorer will tell you, there’s no substitute for seeing the ground on foot.

Monday, September 03, 2018

Third Party Data Is Not Dead Yet

Third party data is not dead yet.

It was supposed to be. The culprit was to be the EU’s General Data Protection Regulation, which would cut off the flow of personal data to third party brokers and, even more devastatingly, prevent marketers from buying third party data for fear it wasn’t legitimately sourced. 

The expectations are real.  A recent Sizmek study found that 77% of marketers predicted data regulations such as GDPR would make targeting audiences with third party data increasingly difficult.  In a Demandbase study, 60% of respondents said that GDPR was forcing a change in their global privacy approach.  And 44% of marketers told Trusted Media Brands  that they expected GDPR would lead to more use of first party data vs. cookies.

Marketers say they're acting on these concerns by cutting back on use of third party data. Duke Fuqua’s most recent CMO Survey found that use of online (first party) customer data has grown at 63% of companies in the past two years while just 31% expanded use of third party data.  Seventy percent expected to further grow first party data in the next two years compared with just 31% for third party data.  A Dentsu Aegis survey had similar results: 57% of CMOs were expanding use of existing (first party) data compared with 37% expanding use of purchased data.

The irony is that reports of GDPR impact seem to have been greatly exaggerated. A Reuters Institute study found 22% fewer third party cookies on European news sites after GDPR deployment, a significant drop which nevertheless means that 78% remain.  Meanwhile, Quantcast reported that clients using its consent manager achieved a consent rate above 90%.  In other words, third party data is still flowing freely even in Europe even if the volume is down a little. The flow is even freer in the U.S., where developments like the new California privacy regulation will almost surely be watered down before taking effect, if not blocked entirely by Federal pre-emption.

Of course, what regulation can’t achieve, self-interest could still make happen. There’s at least some debate (stoked by interested parties) over whether targeting ads with third party data is really more effective than contextual targeting, which is the latest jargon for putting ads on Web pages related to the product. Online ad agency Roast and ad platform Teads did an exhaustive study that concluded contextual targeting and demographic targeting with third party data worked about equally well. The previously-mentioned Sizmek study found that 87% of marketers plan to increase their contextual targeting in the next year and 85% say brand safety is a high or critical priority. (Ads appearing on brand-unsafe Web pages are a problem when ads are targeted at individuals, a primary use for third party data.)  The Trusted Media Brands study also listed brand safety as a major concern about digital media buying (ranked third and cited by 58%) although, tellingly, ROI and viewability were higher (first and second at 62% and 59%, respectively).

But third party data isn’t going away.

It’s become increasingly central for business marketers as Account Based Marketing puts a premium on understanding potential buyers whether or not they're already in the company’s own database.  Third party data also includes intent information based on behaviors beyond the company’s own Web site. Indeed, companies including Lattice Engines, Radius, 6Sense and Demandbase have all shifted much of their positioning away from predictive modeling or ad targeting based on internal data and towards the value of the data they bring.

Then again, business marketing always relied heavily on third party data. What’s arguably more surprising is that consumer marketers also seem to be using it more.  Remember that the CMO surveys cited earlier showed expectations for slower growth, not actual declines.  There's more evidence in the steady stream of vendor announcements touting third party data applications.

Many of these announcements are from established vendors selling established applications, such as ad targeting and marketing performance measurements. For targeting, see recent announcements from TruSignal, Thunder, and AdTheorent; for attribution, see news from Viant and  IRI.

But what's most interesting are the newer applications. These go beyond lists of target customers or comparing anonymized online and offline data. They provide something that only third party data can do at scale: connect online and offline identities. This is something that companies like LiveRamp and Neustar have done for years.  But we're now seeing many interesting new players:

Bridg helps retailers to identify previously anonymous in-store customers, based on probabilistic matching against their proprietary consumer database.  It then executes tailored online marketing campaigns.

SheerID verifies the identities of online visitors, enabling marketers to safely limit offers to members of specific groups such as teachers, students, or military veterans. They do this by building connections to reference databases holding identity details.

PebblePost links previously anonymous Web visitors to postal addresses, using yet another proprietary database to make the connections. They use this to target direct mail based on Web behaviors.

You’ll have noticed that the common denominator here is a unique consumer database.  These do something not available from other third party sources or not available with the same coverage.  Products like these will keep marketers coming back for third party data whether or not privacy regulations make Web-based data gathering more difficult.  So don't cry for third party data: the truth is it never has left you.

Tuesday, August 21, 2018

BadTech Is the Next New Thing

Forget about Martech, Adtech, or even Madtech. The next big thing is BadTech.

I’m referring to the backlash against big tech firms – Google, Amazon, Apple, and above all Facebook – that have relentlessly expanded their influence on everyday life. Until recently, these firms were mostly seen as positive, or at least benignly neutral, forces that made consumers’ lives easier.  But something snapped after the Cambridge Analytica scandal last March.  Scattered concerns became a flood of hostility.  Enthusiasm curdled into skepticism and fear.  The world recognized a new avatar of evil: BadTech.

As a long-standing skeptic (see this from 2016), I’m generally pleased with this development. The past month alone offers plenty of news to alarm consumers:
There's more bad news for marketers and other business people:
Not surprisingly, consumers, businesses, and governments have reacted with new skepticism, concern, and even some action:
But all is not perfect.
  • BadTech firms still plunge ahead with dangerous projects. For example, despite the clear and increasing dangers from poorly controlled AI, it’s being distributed more broadly by Ebay, Salesforce, Google, and Oracle
  • Other institutions merrily pursue their own questionable ideas. Here we have General Motors and Shell opening new risks by connecting cars to gas pumps.  Here – this is not a joke – a university is putting school-controlled Amazon Echo listening devices in every dorm room
  • The press continues to get it wrong. This New York Times Magazine piece presents California’s privacy law as a triumph for its citizen-activist sponsor, when he in fact traded a nearly-impossible-to-change referendum for a law that will surely be gutted before it takes effect in 2020.
  • Proponents will overreach. This opinion piece argues the term “privacy policy” should be banned because consumers think the label means a company keeps their data private. This is a side issue at best; at worst, it tries to protect people from being lazy. Balancing privacy against other legitimate concerns will be hard enough without silly distractions.
So welcome to our latest brave new world, where BadTech is one more villain to fear.  It's progress that people recognize the issues but we can't let emotion overwhelm considered solutions.  Let’s use the moment to address the real problems without creating new ones or throwing away what’s genuinely good.  We can't afford to fail.

Saturday, August 18, 2018

CDP Myths vs Realities

A few weeks ago, I critiqued several articles that attacked “myths” about Customer Data Platforms. But, on reflection, those authors had it right: it’s important to address misunderstandings that have grown as the category gains exposure. So here's my own list of CDP myths and realities. 

Myth: CDPs are all the same.
Reality: CDPs vary widely. In fact, most observers recognize this variation and quite a few consider it a failing. So perhaps the real myth is that CDPs should be the same. It’s true that the variation causes confusion and means buyers must work hard to ensure they purchase a system that fits their needs. But buyers need to match systems to their needs in every category, including those where features are mostly similar.

Myth: CDPs have no shared features.
Reality: This is the opposite of the previous myth but grows from the same underlying complaint about CDP variation. It’s also false: CDPs all do share core characteristics. They’re packaged software; they ingest and retain detailed data from all sources; they combine this data into a complete view of each customer; they update this view over time; and they expose the view to other systems. This list excludes many products from the CDP category that share some but not all of these features. But it doesn’t exclude products that share all these features and add some other ones. These additional features, such as segmentation, data analysis, predictive models, and message selection, account for most of the variation among CDP systems. Complaining that these mean CDPs are not a coherent category is like complaining that automobiles are not a category because they have different engine types, body styles, driving performance, and seating capacities. Those differences make them suitable for different purposes but they still share the same core features that distinguish a car from a truck, tractor, or airplane.

Myth: CDP is a new technology.
Reality: CDPs use modern technologies, such as NoSQL databases and API connectors. But so do other systems. What’s different about CDP is that it combines those technologies in prebuilt systems, rather than requiring technical experts to assemble them from scratch. Having packaged software to build a unified, sharable customer database is precisely the change that led to naming CDP as a distinct category in 2013.

Myth: CDPs don’t need IT support.
Reality: They sure do, but not as much. At a minimum, CDPs need corporate IT to provide access to corporate systems to acquire data and to read the CDP database. In practice, corporate IT is also often involved in managing the CDP itself. (This recent Relevancy Group study put corporate IT participation at 49%.)   But the packaged nature of CDPs means they take less technical effort to maintain than custom systems and many CDPs provide interfaces that empower business users to do more for themselves. Some CDP vendors have set their goal as complete business user self-service but I haven’t seen anyone deliver on this and suspect they never will.

Myth: CDPs are for marketing only.
Reality: It’s clear that departments outside of marketing can benefit from unified customer data and there’s nothing inherent in CDP technology that limits them to marketing applications. But it’s also true that most CDPs so far have been purchased by marketers and have been connected primarily to marketing systems. The optional features mentioned previously – segmentation, analytics, message selection, etc. – are often marketing-specific. But CDPs with those features must still be able to share their data outside of marketing or they wouldn’t be CDPs.

Myth: CDPs manage only first party, identified data.
Reality: First party, identified data is the primary type of information stored in a CDP and it’s something that other systems (notably Data Management Platforms) often handle poorly or not at all. But nothing prevents a CDP from storing third party and/or anonymous data, and some CDPs certainly do.  Indeed, CDPs commonly store anonymous first party data, such as Web site visitor profiles, which will later be converted into identified data when a customer reveals herself. The kernel of truth inside this myth is that few companies would use a CDP to store anonymous, third party data by itself.

Myth: Identity resolution is a core CDP capability.
Reality: Many CDP systems provide built-in identity resolution (i.e., ability to link different identifiers that relate to the same person).  But many others do not.  This is by far the most counter-intuitive CDP reality, since it seems obvious that a system which builds unified customer profiles should be able to connect data from different sources.  But quite a few CDP buyers don’t need this feature, either because they get data from a single source system (e.g., ecommerce or publishing), because their company has existing systems to assemble identities (common in financial services), or because they rely on external matching systems (frequent in retail and business marketing). What nearly all CDPs do have is the ability to retain links over time, so unified profiles can be stitched together as new identifiers are connected to each customer’s master ID. One way to think about this is: the function of identity resolution is essential for building a unified customer database, but the feature may be part of a CDP or something else.
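To make the distinction concrete, here is a minimal sketch of the link-retention side: a persistent identity graph that accepts matches from anywhere (built-in resolution, a source system, or an external matching service) and resolves any identifier to a stable master ID. All names and identifier formats here are invented for illustration, not drawn from any particular CDP.

```python
class IdentityGraph:
    """Union-find over customer identifiers; each linked cluster resolves to one master ID."""

    def __init__(self):
        self.parent = {}

    def _find(self, ident):
        # Register unseen identifiers as their own cluster root.
        self.parent.setdefault(ident, ident)
        while self.parent[ident] != ident:
            # Path halving keeps lookups fast as the graph grows.
            self.parent[ident] = self.parent[self.parent[ident]]
            ident = self.parent[ident]
        return ident

    def link(self, a, b):
        """Record that two identifiers belong to the same person (match can come from any source)."""
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[rb] = ra

    def master_id(self, ident):
        return self._find(ident)


graph = IdentityGraph()
graph.link("email:jane@example.com", "cookie:abc123")  # e.g., match captured at web login
graph.link("email:jane@example.com", "crm:000987")     # e.g., match supplied by an external service

# Later events under any identifier resolve to the same master ID.
assert graph.master_id("cookie:abc123") == graph.master_id("crm:000987")
```

The point of the sketch is that `link` is just data the CDP stores; whatever decided the two identifiers match (the "function" of identity resolution) can live inside or outside the system.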

Myth: CDPs are not needed if there’s an Enterprise Data Warehouse.
Reality: It’s a reasonable simplification to describe a CDP as packaged software that builds a customer-centric Data Warehouse. But a Data Warehouse is almost always limited to highly structured data stored in a relational database.  CDPs typically include large amounts of semi-structured and unstructured data in a NoSQL data store. Relational technology means changing a Data Warehouse is usually a complex, time-consuming project requiring advanced technical skill. Pushing data into a CDP is much easier, although some additional work may later be required to make it usable. Even companies with an existing Data Warehouse often find a CDP offers new capabilities, flexibility, and lower operating costs that make it a worthwhile investment.
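The schema-flexibility point can be made concrete. A relational warehouse needs a column defined before it can store a new attribute, while a CDP-style event store can accept new fields as they arrive and defer the work to read time. A toy illustration (field names invented):

```python
import json

# Semi-structured events land as-is; a new field needs no schema migration.
event_store = []

def ingest(raw_json):
    """Push a raw JSON event into the store without validating against a fixed schema."""
    event_store.append(json.loads(raw_json))

ingest('{"customer": "c1", "type": "page_view", "url": "/pricing"}')
# A new attribute ("utm_source") appears later -- no ALTER TABLE required.
ingest('{"customer": "c1", "type": "page_view", "url": "/demo", "utm_source": "email"}')

# The "additional work to make it usable" happens at read time instead:
sources = [e.get("utm_source", "unknown") for e in event_store]
```

This is the trade the paragraph describes: pushing data in is easy, but older events lack the new field, so consumers must handle its absence.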

Myth: CDPs replace application data stores.
Reality: Mea culpa: I’ve often explained CDPs by showing separate silo databases replaced by a single shared CDP.  But that’s an oversimplification to get across the concept. There are a handful of situations where a delivery system will read CDP data directly, such as injecting CDP-selected messages into a Web page or exposing customer profile details to a call center agent. But in most cases the CDP will synchronize its data with the delivery system’s existing database. This is inevitable: the delivery systems are tightly integrated products with databases optimized for their purpose. The value of the CDP comes from feeding better data into the delivery system database, not from replacing it altogether.

Myth: CDP value depends on connecting all systems.
Reality: CDPs can deliver great value if they connect just some systems, or sometimes even if they only expose data from a single system that was otherwise inaccessible.  This matters because connecting all of a company's systems can be a huge project or even impossible if some systems are not built to integrate with others.  This shouldn't be used as an argument against CDP deployment so long as a less comprehensive implementation will still provide real value.

Myth: The purpose of CDP is to coordinate customer experience across all channels.
Reality: That's one goal and perhaps the ultimate.  But there are many other, simpler applications a CDP makes possible, such as better analytics and more accurate data shared with delivery systems.   In practice, most CDP users will start with these simpler applications and add the more demanding ones over time.

Myth: The CDP is a silver bullet that solves all customer data problems.
Reality: There are plenty of problems beyond the CDP's control, such as the quality of input data and limits on execution systems.  Moreover, the CDP is just a technology and many obstacles are organizational and procedural, such as cooperation between departments, staff skills, regulatory constraints, and reward systems.  What a CDP will do is expose some obstacles that were formerly hidden by the technical difficulty of attempting the tasks they obstruct.  Identifying the problems isn't a solution but it's a first step towards finding one.

Of course, everyone knows there are no silver bullets but there's always that tiny spark of hope that one will appear.  I hesitate to quench that spark because it's one of the reasons people try new things, CDPs included.  But I think the idea of CDPs is now well enough established for marketers to absorb a more nuanced view of how they work without losing sight of their fundamental value.  Gradual deflation of expectations is preferable to a sudden collapse.  Let's hope a more realistic understanding of CDPs will ultimately lead to better results for everyone involved.

Thursday, August 02, 2018

Arm Ltd. Buys Treasure Data CDP

Customer Data Platform vendor Treasure Data today confirmed earlier reports that it is being purchased by Arm Limited, which licenses semi-conductor technologies and is itself a subsidiary of the giant tech holding company SoftBank. The price was not announced but was said to be around $600 million.

The deal was the second big purchase of a Customer Data Platform vendor in a month, following Salesforce’s Datorama acquisition. Arm seems a less likely CDP buyer than Salesforce but made clear their goal is to use Treasure Data to manage Internet of Things data. That’s an excellent fit for Treasure Data’s technology, which is very good at handling large volumes of semi-structured data. Treasure Data will operate as a separate business under its existing management and will continue to sell its product to marketers as a conventional Customer Data Platform.

While Arm is an unexpected CDP buyer, the deal does illustrate some larger trends in the CDP market. One is the broadening of CDP beyond pure marketing use cases: as critics have noted, unified customer data has applications throughout an organization so it doesn’t make sense to limit CDP to marketing users. In fact, the time has probably come to remove “marketer-managed” from the formal definition of CDP.  But that’s a topic for another blog post.

A complementary trend is use of CDP technology for non-customer data. Internet of Things is obviously of growing importance and, although you might argue that IoT data is really just another type of customer data, there’s a reasonable case that the sheer volume and complexity of IoT data justifies considering it a category of its own. More broadly, there are other kinds of data, such as product and location information, which also should be considered in their own terms.

What’s really going on here is that one category of CDPs – the systems that focus primarily on data management, as opposed to marketing applications – is merging with general enterprise data management systems. These are companies like Qubole and Trifacta that often use AI to simplify the process of assembling enterprise data.  These systems do for all sorts of information what a CDP does for customer information. This is a new source of competition for CDPs, especially as corporate IT departments get more involved. There are also a handful of CDP systems, including ActionIQ, Aginity, Amperity, and Reltio, that have the potential to expand beyond customer information. It’s possible that those vendors will eventually exit the CDP category altogether, leaving the field to CDPs that provide marketing-specific functions for analysis and customer engagement. (If that happens, then “marketer-managed” should stay in the definition.)

In any case, the Treasure Data acquisition is another milestone in the evolution of the CDP industry, illustrating that at least some of the systems have unique technology that is worth buying at a premium. I can imagine some of the other data-oriented vendors being purchased for similar reasons. I can also imagine acquisition of companies like Segment and Tealium that have particularly strong collections of connectors to source and target systems. That’s another type of asset that’s hard to replicate.

So we'll see how the industry evolves.  Don't be surprised if it follows several paths simultaneously: some buyers may take an enterprise-wide approach while others limit CDP use to marketing. What I don't yet see is any type of consolidation around a handful of winners who gobble up most of the market share.  That might still happen but, for now, the industry will remain vibrant and varied, as different vendors try different configurations to see which most marketers find appealing.

Wednesday, August 01, 2018

Salesforce Buys Datorama Customer Data Platform: It's Complicated

News that Salesforce had purchased Datorama crossed the wire just as I was starting on two weeks of travel, so I haven’t been able to comment until now. This purchase was noteworthy as the first big CDP acquisition by a marketing cloud vendor. That the buyer was Salesforce was even more intriguing, given that they had purchased Mulesoft in March for $6.5 billion and that Marketing Cloud CEO Bob Stutz (who announced the Datorama deal) had called CDPs “a passing fad” and said Salesforce already had “all the pieces of a CDP” in an interview in June.

The Salesforce announcement didn’t refer to Datorama as a CDP and Datorama itself doesn’t use the term either. They do meet the requirements – packaged software building a unified, persistent customer database that’s open to other systems – but are definitely an outlier. In particular, Datorama ingests all types of marketing-related data, notably including ad campaign- and segment-level performance information as well as customer-level detail. Their stated positioning as “one centralized platform for all your marketing data and decision making” sure sounds like a CDP, but their focus has been on marketing performance, analytics, and data visualization. Before the acquisition, they told me some of their clients ingest customer-level detail but most do not. So it would appear that while Salesforce’s acquisition reflects recognition of the need for a persistent, unified marketing database (something they didn’t get with MuleSoft), they didn’t buy Datorama as a way to build a Single Customer View.

Datorama’s closest competitors are marketing analysis tools like Origami Logic and Beckon. I’ve never considered either of those CDPs because they clearly do not work with customer-level detail. Datorama competes to a lesser extent with generic business intelligence systems like Looker, Domo, Tableau, and Qlik. These traditionally have limited data integration capabilities although both Qlik and Tableau have recently purchased database building products (Podium Data and Empirical Systems, respectively), suggesting a mini-trend of its own. It’s worth noting that one of Datorama’s particular strengths is use of AI to simplify integration of new data sources. The firm’s more recent announcements have touted use of AI to find opportunities for new marketing programs.

Datorama is much larger than most other CDP vendors: it ranked third (behind Tealium and IgnitionOne) in the CDP Institute’s most recent industry report, based on number of employees found in LinkedIn. The company doesn’t release revenue figures but, assuming the 360 employees currently shown on LinkedIn generate $150,000 each, it would have a run rate of $54 million. (This is a crude guess: actual figure could easily be anywhere from $30 million to $80 million.) Sticking with the $54 million figure, the $800 million purchase price is 15x revenue, which is about what such companies cost. (Mulesoft went for 22x revenue.)  The company reports 3,000 clients, which again is a lot for a CDP but gives an average of under $20,000 per client. That’s very low for an enterprise CDP.  It reflects the fact that most of Datorama’s clients use it to analyze aggregated marketing data, not to manage customer-level details.
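The back-of-envelope figures above are internally consistent; spelled out, with the per-employee revenue figure kept as the stated guess:

```python
# Reproducing the paragraph's estimates. The $150,000 revenue per employee
# is the article's own assumption, not a reported figure.
employees = 360
revenue_per_employee = 150_000
run_rate = employees * revenue_per_employee        # $54 million

purchase_price = 800_000_000
multiple = purchase_price / run_rate               # about 15x revenue

clients = 3_000
revenue_per_client = run_rate / clients            # $18,000, i.e. under $20,000

print(run_rate, round(multiple), revenue_per_client)
```

The same arithmetic applied to the $30–$80 million uncertainty range would put the multiple anywhere from roughly 10x to 27x, which is why the 15x figure should be read as indicative only.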

Seeing Datorama as more of a marketing analysis system than CDP makes it a little easier to understand why Salesforce continues to work with other CDP vendors. The Datorama announcement was followed a week later by news that Salesforce Ventures had led a $23.8 million investment in the SessionM CDP, which had announced an expanded Salesforce integration just one month earlier.  SessionM builds its own database but its main strength is real-time personalization and loyalty. Salesforce in June also introduced Marketing Cloud Interaction Studio, a licensed version of the Thunderhead journey orchestration engine. Thunderhead also builds its own database and I consider it a CDP although they avoid the term, reflecting their primary focus on journey mapping and orchestration. The Salesforce announcement states explicitly that the Interaction Studio will shuffle customers between campaigns defined in the Marketing Cloud’s own Journey Builder, clarifying what any astute observer already knew: that Journey Builder is really about campaign flows, not true journey management.

So, how do all these pieces fit with each other and the rest of Salesforce Marketing Cloud? It’s possible that Salesforce will let Datorama, SessionM, and Interaction Studio independently build their own isolated databases but the disadvantages of that are obvious. It’s more likely that Salesforce will continue to argue that ExactTarget should be the central customer database, something that’s been their position so far even though every ExactTarget user I’ve ever spoken with has said it doesn’t work. The best possible outcome might be for Salesforce to use Datorama as its true CDP when a client wants one, and have it feed data into SessionM, Interaction Studio, ExactTarget, and other Marketing Cloud components as needed.  We'll see if that happens: it could evolve that way even if Salesforce doesn't intend it at the start.

Looking at this from another perspective: the combination of Datorama, SessionM, and Interaction Studio (Thunderhead) almost exactly fills every box in my standard diagram of CDP functions, which distinguishes the core data processing capabilities (ingest, process, expose) from optional analytics and engagement features.  Other Marketing Cloud components provide the Delivery capabilities that sit outside of the CDP, either directly (email and DMP) or through integrations.  The glaring gap is identity linkage, which Datorama didn't do the last time I checked.  But that's actually missing in many CDPs and often provided by third party systems.  Still, you shouldn't be too surprised to see Salesforce make another acquisition to plug that hole.  If you're wondering where Mulesoft fits, it may play a role in some of the data aggregation, indexing, reformatting, and exposing steps; I'm not clear how much of that is available in Datorama.  But Mulesoft also has functions outside of this structure.

In short, it's quite true that Salesforce has all the components of a CDP, especially when you include Datorama in the mix.

The idea of stringing these systems together raises a general point that extends beyond Salesforce.  The reality is that almost every marketing system must import data into its own database, rather than connecting to a shared central data store. I’ll admit I’ve often drawn the picture as if there would be a direct connection between the CDP database and other applications.  This should never have been taken literally. There are indeed some situations where the CDP data is read directly, such as real time access to data about a single customer. But even those configurations usually require the CDP data to be indexed or extracted into a secondary format: absent special technology, you don’t do that sort of query directly against the primary “big data” store used by most CDPs.

Outside of those exceptions, a subset of CDP data will usually be loaded into the primary data store of the customer-facing applications (email, DMP, Web personalization, etc.). Realistically, those data stores are optimized for their own application and the applications read them directly.  There’s no practical way the applications can work without them.

This is a nuance that was rightly avoided in the early days of CDP as we struggled to explain the concept. But I think now that CDP is well enough understood that we can safely add some details to the picture to make it more realistic and avoid creating false expectations. I'll try to do that in the future.

Friday, July 27, 2018

Get Ready for CDP Horror Stories as Customer Data Platforms Enter the Trough of Disillusionment

It’s nearly a year since Gartner placed Customer Data Platforms at the top of its “hype cycle” for digital marketing technologies. The hype cycle shouldn’t be taken too literally but it does capture the growing interest in CDPs and reminds us to expect this attention to attract critics.

Sure enough, we’ve recently started to see headlines like “Customer Data Platforms: A Contrarian’s View”, “Why Your Customer Data Platform Is a Failure”  and “CDPs: Yet Another Acronym That Lets Marketers Down”.  It's tempting to dismiss such headlines as competitive attacks or mere attempts to piggyback on wide interest in CDPs.   But we should still take a look at the underlying arguments.   After all, we might learn something.

Let’s start with the “Contrarian’s View”, written by Lisa Loftis, a customer data industry veteran who currently works for SAS. She offers to debunk two common CDP “myths”: that “CDPs solve a problem unique to marketing” and that “'marketing-managed' means you don’t need IT’s help”.

Regarding the first myth, Loftis says that systems to match customer identities have been available for decades and that departments outside of marketing also need unified data. Regarding the second, she states it’s best for marketing and IT departments to work together given the complex technical challenges of marketing systems in general and customer data matching in particular.

She’s right.

That is, she’s right that these technologies are not new, that unified data is useful outside of marketing, and that deploying CDPs requires some technical skills. So far as I know, though, she's wrong to suggest that CDP vendors and advocates (obviously including me) claim otherwise. False belief in these myths is not the reason marketers buy CDPs.

To put it bluntly, the problem that CDP solves isn’t the lack of technology to build unified customer databases: it’s that corporate IT departments haven’t used that technology to meet marketers’ needs. That failure has created a business opportunity that CDPs have filled. It’s the same reason that people hire private security guards when the government's police fail to maintain order.

And, just as good security guards cooperate with the police, CDP systems must integrate with corporate systems and CDP vendors must work with corporate IT.  CDP vendors have designed their systems to be easier to use than traditional customer matching and management technologies, but that only reduces the technical effort without eliminating it. The remaining technical work may be done by the CDP vendor itself, by a service provider, or even by the corporate IT group. The term “marketer-driven” in the CDP Institute’s formal CDP definition is intended to express this: marketers control the CDP, which isn’t the same as doing the technical work themselves.

“Why Your CDP is a Failure” offers an even more provocative headline. But hopes for juicy disaster tales are quickly dashed: author Alan J. Porter of Simple [A] only means that CDPs “fail” because customer data should be shared by all departments. Again, no CDP vendor, buyer, or analyst would ever argue otherwise. There’s no technical reason a CDP can’t be used outside of marketing and some CDP vendors explicitly position their product as an enterprise system. The reason that CDPs are not used outside of marketing is that companies fail to fund enterprise-wide customer databases, not that CDPs can’t deliver such databases. Your CDP is a failure for this reason only if building such a database was its goal. That’s rarely the case.

“CDPs: Yet Another Acronym That Lets Marketers Down” starts with the airy assertion that “When you strip all the nonsensical nuances away from these companies -- the CRMs, the TMSs [tag management systems], the DMPs, the CDPs -- they’re all one simple thing at their cores: identity companies.”  This will be news to people who use such systems every day to run call centers, manage sales forces, capture Web site data, run advertising campaigns, and assemble detailed customer histories.

The article continues with assertions that “identity isn’t everything”, “brands don’t have a complete understanding of their customers”, and “behaviors without motivations teach us nothing."  Few would argue with the first two, while the third is surely overstated.  But the relevance of CDP to all three is questionable.  It seems that author Andy Hunn’s main message is that marketers need the combination of anonymized third party data and survey panel results offered by his own company, Resonate. This may be, but Resonate clearly serves a different purpose from CDPs.  So there's little reason to measure one in terms of the other.

Let me be clear: CDPs are not perfect. Like many new technologies, they are often expected to deliver more than is possible.   We are surely entering the “disillusion” stage of the hype cycle, when tales of failed implementations and studies showing mixed satisfaction levels are common (and prove nothing about the technology's ultimate value).  Critical articles can be helpful in clarifying what CDPs do and don’t offer.  It's easy to lose sight of those boundaries in the early stages of a product category, when the main task is building a clear picture of the problems it solves, not establishing its limits.

This is why the most productive discussion around CDPs right now revolves around use cases. Marketers (and other departments) need concrete examples of how CDPs are being used.  In particular, they need to be told what applications typically become possible when a CDP is added to a company’s marketing technology stack. These generally do one or more of the following: combine data from multiple sources, share that data across channels, and rely on real-time access to the assembled data. It's these applications that justify investment in a CDP.

Complaining that CDPs don’t do other things isn’t very helpful – especially if CDP vendors don’t claim they do.  Nor is it a flaw in CDPs if other solutions can achieve the same thing.  Buyers can and should consider all alternatives to solving a problem: sometimes the CDP will be best and sometimes it won’t. It takes a clear understanding of each possibility to make the right choice.   Blanket claims about the value or failures of CDP may be inevitable but they don't really advance that discussion.

Tuesday, July 03, 2018

Interpublic Group is Buying Acxiom Marketing Services for $2.3 Billion. Here's Why.

Yesterday brought news that Acxiom had agreed to sell its marketing services business to Interpublic Group, a major ad holding company, for $2.3 billion. Acxiom will retain LiveRamp and do business under that name. Acxiom had restructured itself in March into the Marketing Services and LiveRamp groups and announced it was looking at strategic options, so the deal wasn’t especially surprising. But it’s still a milestone in the on-going evolution of the marketing industry.

For historical perspective (and assuming Wikipedia is correct), Acxiom got its start in 1969 compiling mailing lists from public sources such as telephone directories. The company grew to do all sorts of list processing, to manage custom marketing databases, to do identity resolution and to provide data enhancements for marketing lists. Although technology was always central to Acxiom's business, it was ultimately a services organization whose chief resource was a large team of experts in databases and direct marketing. It was also a favorite target of privacy advocates in those quaint days before online data gave them something much scarier to worry about.

Acxiom bought LiveRamp in 2014 for $310 million, as a logical extension of its identity data business. Since then, LiveRamp has grown much more quickly than the rest of Acxiom, currently accounting for about one-quarter of total revenue. Interesting financial note: Acxiom stock closed today at 39.45, giving it a market cap of $2.66 billion. Subtracting the $2.3 billion that Interpublic is paying for everything else leaves LiveRamp with an implicit value of $360 million – not much more than Acxiom paid, and even less if you add the $140 million LiveRamp paid in 2016 for identity matching firms Arbor and Circulate. That’s shockingly low and suggests either an error in my calculations (let me know if you spot one) or that the market has serious doubts about something.
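Since I've invited readers to check the arithmetic, here it is as a quick back-of-envelope sketch. All figures come from the post itself; they are rounded approximations, not audited data.

```python
# Back-of-envelope check of the implied LiveRamp valuation.
# All dollar figures are rounded approximations taken from the post.
market_cap = 2.66e9            # Acxiom market cap at $39.45 per share
sale_price = 2.30e9            # Interpublic's price for Acxiom Marketing Services
liveramp_2014_price = 0.31e9   # what Acxiom paid for LiveRamp in 2014
arbor_circulate = 0.14e9       # LiveRamp's 2016 purchases of Arbor and Circulate

# Whatever value the market isn't attributing to the sold business
# is implicitly attributed to LiveRamp.
implied_liveramp = market_cap - sale_price
print(f"Implied LiveRamp value: ${implied_liveramp / 1e9:.2f} billion")

# Net of what was spent to assemble LiveRamp, the implied premium
# is actually negative -- which is the surprising part.
premium = implied_liveramp - (liveramp_2014_price + arbor_circulate)
print(f"Premium over acquisition cost: {premium / 1e6:.0f} million")
```

The point of the exercise: a negative premium means the market values LiveRamp at less than Acxiom spent building it, which is why the number looks shockingly low.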

But we’ll worry about LiveRamp another day. What’s interesting at the moment is Interpublic as Acxiom’s buyer. At first it seems to buck the trend of private equity firms buying martech companies: see Marketo, Integral Ad Science, Aprimo, and Pitney Bowes. But this report from Hampleton Partners gives a more comprehensive perspective: yes, private equity’s share of marketing deals doubled in 2017, but the main buyers are still big agencies and consultancies. Indeed, Interpublic competitors Dentsu and JWT are among the top three acquirers in the past 30 months, along with Accenture. And bear in mind that Acxiom is really more of a services company than technology developer.  It will be right at home with an agency parent.

So, what will Interpublic do with Acxiom? Some comments I saw said their main interest is Acxiom’s data business, which compiles and sells information about individuals (remember those phone books that started it all?)  However, I disagree.  It's not that I fear privacy regulations will kill that business: I expect third party data sharing will continue.  In fact, new rules should work in Acxiom’s favor.  As a company that privacy watchdogs have barked at for decades, Acxiom is likely to thrive after less responsible providers are driven from the business and as data buyers seek sources they can trust.  Indeed, Interpublic’s own discussion of the deal (click here to download) makes several references to data sales as an incremental revenue stream.

But it seems pretty clear that Interpublic’s main interest lies elsewhere. One of the nice things about ad agencies as buyers is they’re really clear in their explanations of their purchases. Interpublic’s deck lists their strategic rationale for buying Acxiom Marketing Services as acquiring “data solutions that enable omnichannel, closed-loop marketing capabilities and power exceptional marketing experiences.” A bit further on, they define the strategic fit as gaining “world class data governance and management capabilities [which] allow us to fully support clients’ first-party data”.  They also say “data assets have intrinsic value that will grow over time”, but I read this to mean they're most interested in managing each client’s own (first party) data.

This makes total sense. When Acxiom was founded in 1969, customer data was only used by a handful of direct mail marketers who were considered something between irrelevant and sleazy by the “real” marketers at big agencies and advertisers. Today, customer data management is considered the key to success in a future where every buyer expects a personalized experience. Ad buying itself, once an art form based on obscure (and often imaginary) distinctions among audience demographics, has become a mechanical process run by programmatic bidding algorithms. Indeed, the fraud-infested, brand-unsafe online ad market is now the shadiest corner of the industry.

The change is perfectly symbolized by the Association of National Advertisers (ANA) purchasing the DMA (originally Direct Mail Marketing Association): data-driven marketing is now mainstream, even though the data-driven marketers are still not in charge. (If the data marketers had really taken over, DMA would have bought ANA, not the other way around.)

This is the world where Acxiom's expertise at managing customer data is needed for Interpublic to remain at the center of its clients’ marketing programs. If Interpublic doesn’t have that expertise, other agencies and digital consultancies like Accenture and IBM will provide it and displace Interpublic as a result. It’s not a new trend but it’s one that will continue. Don’t be surprised to see other data-driven marketing services firms find similar new homes.

Wednesday, June 20, 2018

Not the CDP Daily News

The World Health Organization has just declared that video addiction is a real disease but they've missed something even more insidious: the dangers of newsletter publishing. The CDP Institute Web site has been down for two days now (hopefully it will be back up by the time you read this and test that link), which means I haven't been able to publish the Institute's daily newsletter. (Yikes -- was my authorship a secret?)  This turns out to be very stressful for me, especially since I feel obligated to write the newsletter anyway so I'm ready whenever the site reappears. Gives a whole new meaning to the term "news junkie".

But, like the gun in a Chekhov play, any copy that's created is begging to be used. So I'll post yesterday's and today's items here for your enjoyment and my relief.  If you don't already subscribe and like what you see, visit the Institute site (once it's running) and join.

June 19, 2018

Google Invests $550 Million in Chinese E-Commerce Merchant
Source: GlobalNewswire
Just in case you had doubts that Google is serious about competing with Amazon in retail, consider this: Google just invested $550 million in a Chinese e-commerce merchant. Google doesn’t do much business in China, so this is about expanding in other markets and gaining a seller listing in Google Shopping. Google also announced several enhancements last week that help retailers display their inventory on-line and drive traffic to local stores. See this from The Street for more thoughts on the deal.

Adobe Expands Attribution Features
Source: Adobe
Adobe has expanded its attribution capabilities with Attribution IQ, an enhancement to Adobe Analytics that estimates the impact of campaigns in all channels on purchases. The offering includes ten different attribution models and lets users drill into results by customer segments, campaigns, and keywords.

IBM Computer Competes Effectively with Human Debaters
Source: CNET
I could tell you about Tru Optik’s Cross-Screen Audience Validation (CAV) service, which draws on Tru Optik’s 75 million household database of smart TV viewers to give advertisers detailed information on audience demographics, reach and frequency by audience segment. But I doubt you care. So instead, ponder this: an IBM computer is now competing effectively with human debaters, showcasing skills like marshalling facts and choosing the most effective arguments. In other words: you’ll soon be able to argue with Alexa and lose.

June 20, 2018

RichRelevance Launches Next-Generation AI-Based Experience Personalization
Personalization vendor RichRelevance has launched its next generation of AI-based personalization tools. Key features include dynamic assembly of individual experiences, real-time performance tracking and continuous optimization. A helpful “Experience Browser” overlays the client’s Web site to display data, rules, and results for each decision in context. Marketers can set business rules to constrain the AI decisions and data scientists can draw on system data to define custom personalization strategies.

Automated Data Management: Immuta Raises $20 Million and a Database Vendor Raises $11 Million
Compared with AI-based personalization, automated data management gets relatively little attention, at least in martech circles. But its potential for solving the data unification problem is huge. Immuta, which marshals sensitive data for machine learning projects, just raised a $20 million Series B. And another vendor, whose open source SQL database manages feeds from machines and IoT devices, raised an $11 million Series A.  Now you know.

Mobile Phone Operators Take Baby Steps to Protect Location Data
I have a slew of other items about AI being used for cool things including seeing around corners, rendering 3D objects from photos, and delivering packages via two-legged robots (creepy!).  But let’s get back to reality with a report that several mobile operators were recently caught selling location data with little control over how it was used. The good news is that Verizon, AT&T and Sprint have shut off access to the two companies that were identified as misusing it. The bad news is, they’re still selling it to pretty much anyone else. Apple also recently changed App Store rules to limit app publishers' access to people’s iPhone contact lists.  So maybe this is progress.