Sunday, January 03, 2021

Software Has Stopped Eating the World

This August will see the tenth anniversary of Marc Andreessen’s famous claim that software is eating the world. He may have been right at the time but things have now changed: the world is biting back.

I’m not referring to COVID-19, although it’s fitting that it took an all-too-physical virus to prove that a digital bubble of alternate facts could not permanently displace reality. Nor am I juxtaposing the SolarWinds hack with the unexpectedly secure U.S. election, which showed a simple paper trail succeeding where the world’s most elite computer security experts failed.

Rather, I’m looking at the most interesting frontiers of tech innovation: self-driving vehicles, green energy, and biosciences top my list. What they have in common is interaction with the physical world. By contrast, recent years haven’t seen radical change in software development. There have certainly been improvements in software, but they’re more about architectures (cloud, micro-services) and self-service interfaces than fundamentally new applications. And while most physical-world innovations are powered by software, the importance of those innovations is that they are changing physical experiences, not that they are replacing them with software-based virtual equivalents.

Even the most important software development of all – artificial intelligence – measures much of its progress by its ability to handle physical-world tasks such as image recognition, autonomous vehicle navigation, and recognizing human emotion. Let’s face it: it’s one thing for a computer to beat you at Go, but quite another for it to beat your dance moves.  Really, what special talent is left for humans to claim as their own?

The shift is well under way in the world of marketing. One of the more surprising developments of the pandemic year was the boom in digital out-of-home advertising, which includes outdoor billboards and indoor signage. The growth seemed odd, given how much time people were forced to spend at home. But the industry marched ahead, spurred in good part by increased ability to track devices as they move through the physical world. It’s a safe bet that out-of-home ads will grow even faster once people can move about more freely.

Indeed, the industries hit hardest by the pandemic – travel and events – also show that virtual experiences are not enough. Whatever their complaints before the pandemic, almost everyone who formerly traveled for business or attended business events is now eager to return to seeing people and places in person. The amount of travel will surely be reduced but it’s now clear that some physical interaction is irreplaceable.

In a similarly ironic way, the pandemic-driven boost to ecommerce has been accompanied by a parallel lesson in the importance of physical delivery. Almost overnight, fulfillment has gone from a boring cost center to a realm of intensive competition, innovation, and even a bit of heroism. Software plays a critical role but it’s a supporting actor in a drama where the excitement is in the streets.

Still closer to home for marketers, we’ve seen a new appreciation for the importance of customer experience, specifically extending past advertising to include product, delivery, service and support. If the obsession of the past decade has been targeted advertising, the obsession of the next decade will be superior service. This ties into other trends that were already under way, including the importance of trust (earned by delivering on promises through fulfillment, not making promises in advertising) and the shift from prospecting with third party data to supporting customers with first party data. Even at the cutting edge, advertising innovation has now shifted to augmented reality, which integrates real-world experiences with advertising, and away from virtual reality, which replaces the real world entirely.

This shift has substantial implications for martech.

- The endless proliferation of martech tools may well continue, especially if the definition of “tools” stretches to include self-built applications. But the importance of tools that only interact with other software will diminish. What will grow will be tools that interact with the real world, and it’s likely those tools will be harder to find and (at least initially) take more skills to use. It’s the difference between building a flight simulator game and an actual aircraft. The stakes are higher when real-world objects are involved and there’s an irreducible level of complexity needed to make things work right.

- As with all technology shifts, the leaders in the old world – the big software companies and audience aggregators like Facebook and Google – won’t necessarily lead in the new world. Reawakened anti-trust enforcement comes at exactly the worst moment for big tech companies needing to pivot. So we can expect more change in the industry landscape than we’ve seen in the past decade.

- New skills will be needed, both to manage martech and to do the marketing itself. The new martech skills will involve learning about new technologies and tighter integration with non-marketing systems, although fundamentals of system selection and management will be largely the same. The marketing skill shift may be more profound, as marketers must master entirely new modes of interaction. But, again, the marketer’s fundamental tasks – to understand customer motivations and build programs that satisfy them – will remain what they always were.

It’s been said that people overestimate short-term change and underestimate long-term change.  The shift from software to physical innovation won’t happen overnight and will never be total. But the pendulum has reversed direction and the world is now starting to eat software. Keep an eye out for that future.

Sunday, December 13, 2020

MarTech Plot Lines for 2021


“Apophenia” – seeing patterns where none exist – is both occupational hazard and job requirement for an industry analyst. The CDP Institute Daily Newsletter provides a steady supply of grist for my pattern detection mill. But the selection of items for that newsletter isn’t random. I have a list of long-running stories that I follow, and keep an eye out for items that illuminate them. I’ll share some of those below.

Feel free to play along at home and let me know what stories you see developing. Deep State conspiracy theories are out of bounds but you’re welcome to speculate on the actual author(s) of the works attributed to “Scott Brinker”. 

Media

Everyone knows the pandemic accelerated the shift towards online media that was already under way. A few points that haven’t been made quite so often include:

- connected TVs and other devices allow individual-level targeting without use of third-party cookies. As online advertising is increasingly delivered through those channels,  the death of cookies becomes less important. Nearly all device-level targeting can also include location data, adding a dimension that cookies often lack.

- walled gardens (Facebook, Google, Amazon) face increasing competition from walled flower pots – that is, businesses with less data but a similar approach. Retailers like Walmart, Kroger, Target, and CVS have all started their own ad networks, drawing on their own customer data. Traditional publishers like Meredith have collected their formerly-scattered customer data to enable cross-channel, individual-level targeting. Compilers like Neustar and Merkle are also entering the business. None of these has the data depth or scale of Facebook, Google, or Amazon but their audiences are big enough to be interesting. The various “universal ID” efforts being pursued by the ad industry will enable the different flower pots to cross-pollinate, creating larger audiences that I’ll call walled flower beds unless someone stops me.

- shoppable video is growing rapidly. Amazon seems unstoppable but it faces increasing competition from social networks, streaming TV, and every other digital channel that can let viewers make purchases related to what they’re watching. The numbers are still relatively small but the potential is huge. And note that this is a way to sell based purely on context, so targeting doesn’t have to be based on individual identities. That will become more important as privacy regulations become more effective at shutting off the flow of third-party personal data.

- digital out-of-home ads will combine with augmented and virtual reality to create a fundamentally new medium. The growth of digital out-of-home advertising is worth watching just because DOOH is such a great acronym. But it’s also a huge story that doesn’t currently get much attention and will explode once people can travel more freely post-pandemic. Augmented and virtual reality are making great technical strides (how about an AR contact lens?) but so far seem like very niche marketing tools. However, the two technologies perfectly complement each other, and will be supercharged by more accessible location data. Watch this space.

Marketing Technology

- data will become more accessible. That marketers want to be “data-driven” is old news. What’s changing is that years of struggle are finally yielding progress toward making data more available and providing the tools to use it. As with digital advertising, the pandemic has accelerated an existing trend, achieving in months digital transformations that would otherwise have taken years.  Although internal data is the focus of most integration efforts, access to external data is also growing, privacy rules notwithstanding. Intent data has been a particular focus with recent announcements from TechTarget, ZoomInfo, Spiceworks Ziff Davis, and Zeta Global.

- artificial intelligence will become (even more) ubiquitous. It seems just yesterday that we were impressed to hear that a company’s product was “AI-powered”. Today, that’s as exciting as being told their offices have “electric lights”. But AI continues to grow stronger even if it doesn’t get as much attention (which the truly paranoid will suspect is because the AIs prefer it that way). Marketers increasingly worry that AI will ultimately replace them, even if it makes them more productive before that happens. The headline story is that AI is taking on more “creative” tasks such as content creation and campaign design, which were once thought beyond its capabilities. But the real reason for its growth may be that interactions are shifting to digital channels where success will be based more on relentless analytics than an occasional flash of uniquely human insight.

- blockchain will quiet down. I’ll list blockchain only to point out that it’s been an underachiever in the hype-generation department. Back in 2018 we saw it at least as often as AI. Now it comes up only rarely. There are many clear applications in logistics and some promising proposals related to privacy. But there’s less wild-eyed talk about blockchain changing the world. Do keep an ear open, though: I suspect more is happening behind the scenes than we know.

- no-code will continue to grow. If anything has replaced AI as the buzzword of the year, it’s “no code” and related concepts like “self-service” and “citizen [whatever]”. It’s easy to make fun of these (“citizen brain surgeon”, anyone?) but there’s no doubt that many workers become more productive when they can automate processes without relying on IT professionals. The downside is the same loss of quality control and integration posed by other types of shadow IT – although no-code systems are more often governed than true shadow IT projects. In addition, no-code’s more sophisticated cousin, low-code, is widely used by IT professionals. It’s possible to see no-code systems as an alternative to AI: both improve productivity, one by letting workers do more and the other by replacing them altogether. But a more realistic view is to recognize AI as a key enabling technology inside many no-code systems. As the internal AIs get smarter, no-code will take on increasingly complex tasks, making it more helpful (and more threatening) to increasingly skilled workers.

Marketing

The pandemic has changed how marketers (and everyone else) do their work. With vaccines now reaching the public, it’s important to realize that conditions will change again fairly soon. But that doesn’t mean things will go back to how they were.

- events have changed forever. Yes, in-person events will return and many of us will welcome them with new appreciation for what we’ve missed. But tremendous innovation has occurred in on-line events and more will surely appear in coming months. It’s obvious that there will be a permanent shift towards more digital events, with in-person events reserved for situations where they offer a unique advantage. We can also expect in-person events to incorporate innovations developed for digital events – such as enhanced networking techniques and interactive presentations. I don’t think the significance of this has been fully recognized.  Bear in mind that live events are often the most important new business source for B2B marketers, so major changes in how they work will ramify throughout the marketing and sales process.

- remote work is here to stay. Like events, marketers’ worksites will drift away from the current nearly-all-digital mode to a mix of online and office-based activities. Also like events, innovations developed for remote work, such as improved collaboration tools, will be deployed in both situations. The key difference is that attendance at most events is optional, so attendees can walk away from dysfunctional changes. Workers have less choice about their environments, so harmful innovations such as employee surveillance and off-hours interruptions are harder for them to reject. Whether these stressors outweigh the benefits of remote work will depend on how well companies manage them, so we can expect a period of experimentation and turmoil as businesses learn what works best. With luck, this will mean new attention to workplace policies and management practices, something many firms have handled poorly in the past. Companies that excel at managing remote workers will have a new competitive advantage, especially since remote work lets the best workers choose from a wider variety of employers.

- privacy pressures will rise. The European Union’s General Data Protection Regulation (GDPR) wasn’t the first serious privacy rule or the only reason that privacy gained more attention. But its enforcement date of May 25, 2018 does mark the start of an escalating set of changes that impact what data is available to marketers and how consumers view use of their personal information. These changes will continue and companies will find it increasingly important to manage consumer data in ways that comply with ever-more-demanding regulations and give consumers confidence that their data is being handled appropriately. (A closely related subplot is continued security breaches as companies fail to secure their data despite best efforts.  Another is the continued misbehavior of Facebook and other social media firms and increasing resistance by regulators and consumers.  That one is worth a channel of its own.)  Marketers will need to take a more active role in privacy discussions, which have been dominated by legal, security, and IT staffs in businesses, and by consumer advocates, academics, and regulators in the political world. Earning a seat at that crowded table won’t be easy but making their voice heard is essential if marketers want the rules to reflect their needs.

- trust is under fire. This is a broad trend spanning continents and stretching back for years (see Martin Gurri’s uncannily prescient The Revolt of the Public, published in 2014). Socially, the trend presents itself as a loss of trust in institutions, the benefits of technology, and credentialed experts in general. In marketing, it shows up as companies voicing disappointment with data-driven analytics and personalization, as consumers not trusting companies to manage or protect their data, as workers fearing that AI systems will harm creativity and codify unfair bias, as widely-noted gaps between what customers want and what companies deliver, as “citizen developers” preferring to build their own systems, and as buyers preferring peers, Web searches, social media, and pretty much any other information source to analyst reports.

Trust is the theme that connects all the stories I’ve listed above. Without trust, consumers won’t share their data, respond to marketing messages, or try new channels; governments will push for more stringent privacy and business regulations; workers will be less productive; and all industry progress will move more slowly. The trust crisis is too broad for marketers to fix by themselves. But they need to account for it in everything they do, adjusting their plans to include trust-building measures that might not have been needed in a healthier past. The pandemic will end soon and technologies come and go. But trust will be a story to follow for a long, long time.

Wednesday, November 11, 2020

Trust-Hub Maps Company Data for Privacy and Other Uses

Marketers care about privacy management primarily as it relates to customer data, but privacy management overlaps with a broader category of governance, risk, and compliance (GRC) systems that cover many data types. Like privacy systems (and Customer Data Platforms), GRC systems require an inventory of existing customer data, including systems, data elements within each system, and uses for each element.   These inventories form the foundation for functions including risk assessments, security, process documentation, responses to consumer data requests, and compliance monitoring.

Having a single inventory would be ideal. But each application needs the inventory to be presented in its own way. One reason so many different systems gather their own data inventories is that each is limited to its own type of presentation.

Trust-Hub Privacy Lens avoids this problem by creating a comprehensive data inventory and then enabling users to create whatever views they need. This requires gathering not just a list of data elements, but also documenting the users, systems, geographic locations, and business processes associated with each element. These attributes can then be filtered to create views tailored to a particular purpose. The system builds on this foundation by creating applications for related tasks such as risk analysis, privacy impact assessments, and security risk analysis. Users access the system through customizable dashboards that can highlight their particular concerns.
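The core idea here is simple enough to sketch: one shared inventory, many filtered views. The following Python fragment is a hypothetical illustration, assuming invented element names and attributes (Trust-Hub’s actual data model is not public):

```python
# Hypothetical sketch of a shared data inventory with filtered views.
# Element names and attribute values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataElement:
    name: str
    system: str
    location: str
    processes: list = field(default_factory=list)
    users: list = field(default_factory=list)

inventory = [
    DataElement("email", "CRM", "EU", processes=["marketing"], users=["sales"]),
    DataElement("purchase_history", "ecommerce", "US", processes=["analytics"]),
    DataElement("phone", "CRM", "EU", processes=["support"]),
]

def view(inventory, **filters):
    """Return elements whose attributes match every filter value."""
    def matches(el):
        for attr, value in filters.items():
            held = getattr(el, attr)
            if isinstance(held, list):
                if value not in held:
                    return False
            elif held != value:
                return False
        return True
    return [el for el in inventory if matches(el)]

# A privacy officer's view: personal data held in the CRM inside the EU.
eu_crm = view(inventory, system="CRM", location="EU")
print([el.name for el in eu_crm])  # ['email', 'phone']
```

Each application (risk assessment, DSAR fulfillment, compliance reporting) would call `view` with its own filters rather than maintaining a separate inventory.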

Privacy Lens offers a range of methods to collect its inventory. It can import existing information, such as spreadsheets prepared for compliance reporting or security audits. It can read metadata from common systems including Salesforce and BigID or import metadata gathered by specialized discovery tools. When the data is not already assembled, Trust-Hub can scan existing data sources to create its own maps of data elements or let users enter information manually. In addition to data elements, the system can track business processes, user roles, individual users, resources, locations, external organizations, legal information, and evidence related to particular incidents. This information is all mapped against a master data model, helping users track what they’ve assembled and what’s still missing. The data is held in a graph database, Neo4j, a technology that is particularly good at tracking relationships among different elements.

Although some Privacy Lens users will focus on loading data into the system, most will be interested in using that data for specific purposes. Privacy Lens supports these with applications. Privacy managers, for example, can see an over-all privacy risk score, a list of open risks, a matrix that helps to prioritize risks by plotting them against frequency and impact, detailed reports on each risk, and additional risk scores for specific data types and processes. These risk scores are based on ten factors such as confidentiality, accuracy, volume, and regulation. The scores enable users to assess not just the risk of violating a privacy regulation, but risk of a security breach and the potential cost of such a breach. Trust-Hub argues that companies tend to focus on compliance risk even though the costs of litigation and reputation loss from a breach are vastly higher than any regulatory fines.
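A multi-factor risk score of this kind is typically a weighted combination of per-factor ratings. Trust-Hub’s actual ten factors and weighting scheme are not public, so the factor names and numbers below are assumptions; this is only a minimal sketch of the general pattern:

```python
# Illustrative weighted risk score. Factor names and weights are
# assumptions, not Trust-Hub's actual formula.
def risk_score(ratings, weights=None):
    """Combine per-factor ratings (0-10) into a single 0-10 score,
    as a weighted average; equal weights by default."""
    weights = weights or {f: 1.0 for f in ratings}
    total = sum(weights[f] for f in ratings)
    return sum(ratings[f] * weights[f] for f in ratings) / total

score = risk_score(
    {"confidentiality": 8, "accuracy": 5, "volume": 9, "regulation": 7}
)
print(round(score, 2))  # 7.25
```

Separate weight sets could then produce the distinct compliance-risk and breach-cost scores described above from the same underlying ratings.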

Privacy officers can also use the system to conduct formal assessments, such as Privacy Impact Assessments, by answering questions in a system-provided template. The system keeps a copy of each assessment report along with a snapshot of the data model when the report is created, making it easy to identify subsequent changes and how they might change the assessment. Compliance and security officers can conduct other assessments within the system, such as tracing risks created when data is shared with external business partners.

Risks uncovered during an assessment can be assigned a mitigation plan, with tasks assigned to individual users and reports tracking progress towards completion. Data in the model can also create other reports, such as Record of Processing Activity (ROPA), consent dates, and legal justifications. Personal data usage reports can take multiple perspectives, including which systems and processes use a particular data element, which elements are used by a particular system or process, and where a particular individual’s data is held.

Trust-Hub has two additional products that exploit the Privacy Lens data map. Privacy Hub loads actual customer data from mapped systems, where it can be used to respond to data requests by consumers (Data Subject Access Requests, or DSARs) or answer questions from business partners without revealing personal information (for example, to verify that a particular individual is over 18). Privacy Engine loads masked versions of personal data and makes it available for analysis, so that users can run reports and create lists without being given access to private data.

Trust-Hub was founded in 2016 and released its first product in 2018. The company now has more than one hundred clients, primarily large organizations selling directly to consumers, and service providers to those companies, such as consultants, system integrators, and law firms. Pricing is based on the number of users and starts around $25,000 per year.

Sunday, October 11, 2020

Twilio Buys CDP Segment for $3.2 Billion

Friday afternoon brought an unconfirmed Forbes report that communications platform Twilio is buying CDP Segment for $3.2 billion. (The all-stock deal was officially announced on Monday.)  It's Twilio’s third acquisition this year, following much smaller deals in January for telephony platform Teravoz and in July for IoT connector Electric Imp.  It comes two years after Twilio’s $3 billion purchase of email platform SendGrid.

The deal is intriguing from at least three perspectives:

Valuation: the $3.2 billion price is impressive by any standard. Segment’s current revenue isn’t known, although one published estimate put it at $180 million for 2019. That sounds a bit high for a company with 450 employees at the time, but let's go with it and assume $200 million for 2020 revenue. This means Twilio is paying 16x revenue, which is less than the 20x that Salesforce paid for Mulesoft ($6.5 billion on roughly $300 million) but in line with the 15x that Adobe paid for Marketo ($4.7 billion on $320 million) or the 14x that Twilio itself paid for SendGrid ($2 billion on $140 million when the deal was announced; the $3 billion price reflects the subsequent rise in Twilio’s stock). Note that these prices are well above run-of-the-mill SaaS valuations, which are below 10x revenue.
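The multiples above are simple price-to-revenue ratios; here is the back-of-the-envelope arithmetic, using the figures quoted (Segment’s $200 million is the assumed estimate, and the multiples round slightly differently than the in-text shorthand):

```python
# Revenue multiples from the deal figures quoted above.
# Prices and revenues in $ millions; Segment revenue is an estimate.
deals = {
    "Twilio/Segment":      (3_200, 200),
    "Salesforce/Mulesoft": (6_500, 300),   # revenue "roughly" $300M
    "Adobe/Marketo":       (4_700, 320),
    "Twilio/SendGrid":     (2_000, 140),   # price at announcement
}

multiples = {name: price / revenue for name, (price, revenue) in deals.items()}
for name, multiple in multiples.items():
    print(f"{name}: {multiple:.1f}x revenue")
```

This prints 16.0x, 21.7x, 14.7x, and 14.3x respectively, consistent with the rounded figures in the text.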

Twilio: the SendGrid acquisition marked a major movement of Twilio beyond its base in telephone messaging to support a broader range of channels. If they’re to avoid the fragmentation that has plagued the larger marketing clouds, which also grew by acquisition, they need a CDP to unify their customer data. The big clouds (Oracle, Adobe, Salesforce, Microsoft, SAP) all chose to build their CDPs internally, but Twilio is much smaller and lacks the resources to do the same in a timely fashion. (Even the big clouds struggled, of course). On the other hand, Twilio’s surging stock price makes acquisition much easier. So buying a CDP they can deploy immediately gains them time and a mature product. It also offers entry to 20,000 accounts that might buy other Twilio products, especially given Segment’s position at the heart of their customer data infrastructure.

Of course, if Twilio really wants to compete with the marketing clouds, it will need to support other channels, most notably Web site management and ecommerce. Note that vendors beyond the clouds are pursuing the same strategy, including Acquia (which bought CDP AgilOne), IBM-spinoff Acoustic, MailChimp, and HubSpot. So the strategy isn’t unique, but it may be the only way for companies like Twilio to avoid being marginalized as apps that depend on major platforms controlled by other vendors. By definition, apps are easily replaced and are therefore easily commoditized. That’s a position to escape if you have the resources to expand beyond it.

CDP Industry: Segment is/was the largest independent CDP vendor, although Tealium and Treasure Data are close. Other recent CDP acquisitions were mostly mid-tier vendors (AgilOne, Evergage, QuickPivot, Lattice Engines, SessionM). Of these deals, only AgilOne seemed central to the product strategy of the buyers. Segment’s decision to sell rather than try to grow on its own may signal a recognition that it will be increasingly difficult to survive as a general-purpose independent CDP. We’ve already seen much of the industry shift to more defensible niches, including integrated marketing applications and vertical industry specialization. There’s certainly still a case to be made for an independent CDP as a way to avoid lock-in by broad marketing clouds. But there’s no doubt that the marketing cloud vendors’ own CDPs will grab some chunk of the market, and more will be lost to CDPs embedded in other systems (email, ecommerce, reservations, etc.), offered by service vendors (Mastercard, Vericast, TransUnion, etc.) and home-built on cloud platforms like Amazon Web Services and Google Cloud.

Given these pressures, we’re likely to see additional purchases of CDPs by companies who are trying to build their own complete marketing platforms, including Shopify, MailChimp, HubSpot, and a number of private-equity backed roll-ups. Faced with a daunting competitive situation, many CDP vendors will be interested in selling, even at prices that might not be as high as they once hoped.

Ironically, none of this bodes ill for the fundamental concept of the CDP itself. Companies will still need a central system to assemble and share unified customer profiles. It is indeed the platform on which the other platforms are built. Whether their CDP is stand-alone software or part of a larger solution doesn’t really matter from the user’s perspective: what matters is that clean, consistent, complete customer data is easily available to any system that needs it. Similarly, companies will still need the skills to build and manage CDPs.  Marketing, data, and IT departments will wrestle with customer data long into the future, and the winners will be best positioned to achieve business success. 


Friday, September 25, 2020

Software Review: Skypoint Cloud Combines CDP and Privacy Management

There are obvious similarities between Customer Data Platforms and privacy systems: both find customer data in all company systems; both assemble that data into unified profiles; and both govern access to those profiles. Indeed, some CDP vendors have expanded into privacy management by building consent modules into their systems or by integrating third-party consent managers.

Still, the line between CDP and privacy managers is usually clear: CDPs store customer data imported from other systems while privacy managers read the data in place. There might be a small gray area where the privacy system imports a little information to do identity matching or to build a map of what each source system contains. But it’s pretty easy to distinguish systems that build huge, detailed customer data sets from those that don’t. 

There’s an exception for every rule. Skypoint Cloud is a CDP that positions itself as a privacy system, including data mapping, consent management, and DSR (Data Subject Request) fulfillment. What makes it a CDP is that Skypoint ingests all customer data and builds its own profiles. Storing the data within the system actually makes fulfilling the privacy requirements easier, since Skypoint can provide customers with copies of their data by reading its own files and can ensure that data extracts contain only permitted information. Combining CDP and privacy in a single system also saves the duplicate effort of having two systems each map and read customer data in source systems.

The conceptual advantages of having one system for both CDP and privacy are obvious. But whether you’d want to use a combined system depends on how good it is at the functions themselves. This is really just an example of the general “suite vs best-of-breed” debate that applies across all systems types. 

You won’t be surprised that a young, small vendor like Skypoint lacks many refinements of more mature CDP systems. Most obviously, its scope is limited to ingesting data and assembling customer profiles, with just basic segmentation capabilities and no advanced analytics or personalization.  That’s only a problem if you want your CDP to include those features; many companies would rather use other tools for them anyway. There’s that “suite vs best-of-breed” choice again.

When it comes to assembling the unified database, Skypoint has a bit of a secret weapon: it relies heavily on Microsoft Azure Data Lake and Microsoft’s Common Data Model. Azure lets it scale effortlessly, avoiding one set of problems that often limit new products. Common Data Model lets Skypoint tap into an existing ecosystem of data connectors and applications, again saving Skypoint from developing those from scratch. Skypoint says they’re the only CDP vendor other than Microsoft itself to use the Common Data Model: so far as I know, that’s correct. (Microsoft, Adobe, SAP, and others are working on the Open Data Initiative that will map to the Common Data Model but we haven’t heard much about that recently.) 

How it works is this: Skypoint can pull in any raw data, using its own Web tag or other sources, and store it in the data lake. Users set up a data flow to ingest each source, using either the existing or custom-built connectors. The 200+ existing connectors cover most of the usual suspects, including Web analytics, ecommerce, CRM, marketing automation, personalization, chat, Data Management Platforms, email, mobile apps, data stores, and the big cloud platforms.

Each data flow maps the source data into data entities and relations, as defined in the Common Data Model or adjusted by the user. This is usually done before the data is loaded into the data lake but can also be done later to extract additional information from the raw input.  Skypoint applies machine learning to identify likely PII within source data and lets users then flag PII entities in the data map.  Users can also define SQL queries to create calculated values. 
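Skypoint’s actual PII detection uses machine learning and its details are not public; as a stand-in, the sketch below shows the general idea of scanning sampled column values and flagging columns where most values match a known PII pattern. All names and thresholds are assumptions:

```python
# Rough, hypothetical sketch of flagging likely PII in source columns.
# This is NOT Skypoint's method; it illustrates the general approach
# with simple regex patterns over sampled values.
import re

PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def flag_pii(columns, threshold=0.8):
    """Return column -> PII type when most sampled values match a pattern."""
    flags = {}
    for name, samples in columns.items():
        for pii_type, pattern in PATTERNS.items():
            hits = sum(bool(pattern.fullmatch(str(v))) for v in samples)
            if samples and hits / len(samples) > threshold:
                flags[name] = pii_type
    return flags

cols = {
    "contact": ["ann@example.com", "bob@example.com"],
    "note": ["called back", "left message"],
}
print(flag_pii(cols))  # {'contact': 'email'}
```

In Privacy Lens and Skypoint alike, automated detection of this kind only proposes candidates; a user still confirms the PII flag in the data map.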

Each flow has a privacy tab that lets the user specify which entities are returned by Data Subject Requests, whether data subjects can order the data erased, and which data processes use each entity. The data processes, which are defined separately, can include multiple entities with details about which entities are included and what consents are required. Users can set up different data processes for customers who are subject to different privacy regulations due to location or other reasons.

Once the data is available to the system, Skypoint can link records related to the same person using either rule-based (deterministic) matches or machine learning. It’s up to each client to define its own matching rules. The system maintains its own persistent ID for each individual. Matches can be either incremental – matching only new inputs to existing IDs – or can rebuild the entire matching universe from scratch. Skypoint also supports real-time identity resolution through API calls from a Web tag.
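Skypoint’s matching internals aren’t public, but a minimal sketch of the deterministic side – match on a normalized email or phone, and mint a new persistent ID when nothing matches – might look like this (all names and rules here are hypothetical):

```python
import uuid

def resolve(record, index):
    """Rule-based identity resolution sketch.

    index maps ('email', value) or ('phone', value) keys to persistent IDs.
    Returns the matched persistent ID, or a newly minted one.
    """
    # Try each identifier in priority order, normalized for comparison.
    for field in ("email", "phone"):
        value = record.get(field)
        if value:
            key = (field, value.strip().lower())
            if key in index:
                return index[key]
    # No match: mint a new persistent ID and register the record's keys.
    new_id = str(uuid.uuid4())
    for field in ("email", "phone"):
        if record.get(field):
            index[(field, record[field].strip().lower())] = new_id
    return new_id

index = {}
id1 = resolve({"email": "Ann@Example.com"}, index)
id2 = resolve({"email": "ann@example.com", "phone": "555-0100"}, index)
print(id1 == id2)  # True: matched on normalized email
```

An incremental run would keep feeding new records into the same index, while a full rebuild would start from an empty one; the persistent ID is what survives either way.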

After the matching is complete, the system merges its data into unified customer profiles. Skypoint provides a basic audience builder that lets users define selection conditions. This also leverages Skypoint's privacy features by first having users define the purpose of the audience and then making available only data entities that are permitted for that purpose. Users can also apply consent flags as variables within selection rules. Audiences can be connected with actions, which export data to other systems manually or through connectors.
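To make the purpose-based filtering concrete, here is a hypothetical sketch (not Skypoint’s actual code; every entity and purpose name is invented): only entities permitted for the audience’s declared purpose are exposed to the selection rule, and consent flags act as ordinary rule variables:

```python
# Hypothetical sketch: entities permitted per declared purpose.
PERMITTED_ENTITIES = {
    "email_marketing": {"email", "purchase_history", "consent_email"},
    "analytics": {"purchase_history", "web_visits"},
}

def build_audience(profiles, purpose, rule):
    """Apply a selection rule, exposing only fields allowed for the purpose."""
    allowed = PERMITTED_ENTITIES[purpose]
    audience = []
    for profile in profiles:
        # Fields outside the permitted set are invisible to the rule.
        visible = {k: v for k, v in profile.items() if k in allowed}
        if rule(visible):
            audience.append(visible)
    return audience

profiles = [
    {"email": "a@x.com", "purchase_history": 3, "consent_email": True},
    {"email": "b@x.com", "purchase_history": 9, "consent_email": False},
]
# Rule: repeat buyers who have consented to email contact.
result = build_audience(
    profiles, "email_marketing",
    lambda v: v.get("consent_email") and v["purchase_history"] > 1,
)
print(len(result))  # 1
```

Declaring the purpose first, and filtering the available fields from it, is what makes improperly consented data hard to use by accident rather than relying on users to remember the rules.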

Users can supplement the audience builder by creating their own apps with Microsoft Azure tools or let external systems access the data directly by connecting through the Common Data Model.

Back to privacy. Skypoint creates an online Privacy Center that lets customers consent to different uses of their data, make data access requests, and review company policy statements. It creates an internal queue of access requests and tracks their progress towards fulfillment. Users can specify information to be used in the privacy center, such as the privacy contact email and URLs of the policy statements. They can also create personalized email templates for privacy-related messages such as responses to access requests or requests to verify a requestor’s email address.

This is a nicely organized set of features that includes what most companies will need to meet privacy regulations. But the real value here is the integration with data management: gathering data for subject access requests is largely automated when data is mapped into the system through the data flows, a major improvement over the manual data assembly required by most privacy solutions. Similarly, the connection between data flows, audiences, and data processing definitions makes it easier to ensure the company uses only properly consented information. There are certainly gaps – in particular, data processes must be manually defined by users, so an undocumented process would be missed by the system. But that’s a fairly common approach among privacy products.

Pricing for Skypoint starts with a free version limited mostly to the privacy center, consent manager, and data access requests. Published pricing runs past $2,000 per month for more than ten data integrations. The company was founded in 2019 and is just selling to its first clients.

Sunday, September 13, 2020

Software Review: Osano Manages Cookie Consent and Access Requests

The next stop on our privacy software tour is Osano, which bills itself as “the only privacy platform you’ll ever need”. That’s a bit of an overstatement: Osano is largely limited to data subject interactions, which is only one of the four primary privacy system functions I defined in my first post on this topic. (The other three are: discovering personal data in company systems, defining policies for data use, and enforcing those policies.) But Osano handles the interactions quite well and adds several other functions that are unique. So it’s certainly worth knowing.

The two main types of data subject interactions are consent management and data subject access requests (DSARs). Osano offers structured, forms-based solutions to both, available in a Software-as-a-Service (SaaS) model that lets users deploy them on Web sites with a single line of JavaScript or on Android and iOS mobile apps with an SDK.

The consent management solution provides a prebuilt interface that automatically adapts its dialog to local laws, using geolocation to determine the site visitor’s location. There are versions for 40+ countries and 30+ languages, which Osano updates as local laws change. Because it is delivered as a SaaS platform, the changes made by Osano are automatically applied to its clients. This is a major time-saver for organizations that would otherwise need their own resources to monitor local laws and update their systems to conform to changes.

Details will vary, but Osano generally lets Web visitors consent to or reject different cookie uses including essential, analytics, marketing, and personalization. Where required by laws like the California Consumer Privacy Act (CCPA), it will also collect permission for data sharing. Osano stores these consents in a blockchain, which prevents anyone from tampering with them and provides legally acceptable proof that consent was obtained. Osano retains only a hashed version of the visitor’s personal identifiers, thus avoiding the risk of a PII leak while still enabling users to search for consent on a known individual.
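Osano hasn’t documented its exact scheme, but the underlying idea – store a salted one-way hash of the identifier so the consent record holds no PII, yet can still be looked up for a known person – can be sketched as follows (the salt value and field names are hypothetical):

```python
import hashlib
import time

def hash_identifier(identifier, salt="per-tenant-salt"):
    """One-way hash: the stored record can't reveal the email,
    but a known email can still be searched for."""
    normalized = identifier.strip().lower()
    return hashlib.sha256((salt + normalized).encode()).hexdigest()

def consent_record(email, consents):
    """Build a consent record that stores only the hashed identifier."""
    return {
        "subject": hash_identifier(email),  # hashed, never the raw address
        "consents": consents,               # e.g. {"analytics": True}
        "timestamp": int(time.time()),
    }

record = consent_record("ann@example.com", {"analytics": True, "marketing": False})
# Lookup for a known individual: hash the known email and compare.
print(record["subject"] == hash_identifier("Ann@Example.com"))  # True
```

Because the hash is one-way, a leaked record exposes no address, but a legitimate query that already knows the address can still find the matching consent.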

Osano’s use of blockchain to store consent records is unusual. Also unusual: Osano will search its client’s Website to check for first- and third-party cookies and scripts. The system will tentatively categorize these, let users confirm or change the classifications, and then let site visitors decide which cookies and scripts to allow or block. There’s an option to show visitors details about each cookie or script.

Osano also provides customer-facing forms to accept Data Subject Access Requests. The system backs these with an inventory of customer data, built by users who manually define systems, data elements, and system owners. Put another way: there’s no automated data discovery. The DSAR form collects the user’s information and then sends an authentication email to confirm they are who they claim.  Once the request is accepted, Osano sends notices to the owners of the related systems, specifying the data elements included and the action requested (review, change, delete, redact), and tracks the owners’ reports on completion of the required action. Osano doesn’t collect the data itself or make any changes in the source systems.

The one place where Osano does connect directly with source systems is through an API that tracks sharing of personal data with outside entities. This requires system users to embed an API call within each application or workflow that shares such data: again, there’s no automated discovery of such flows. Osano receives notification of data sharing as it happens, encrypts the personal identifiers, and stores them in a blockchain along with the event details. Users can search the blockchain for the encrypted identifiers to build a history of when each customer’s data was shared.
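A rough sketch of how such an event log could work, with a hashed subject key supporting later history searches. All names here are hypothetical stand-ins, and a plain list stands in for the tamper-evident blockchain store:

```python
import hashlib
import time

def hashed(identifier):
    # One-way hash so the event log holds no raw identifiers.
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

sharing_log = []  # stand-in for the tamper-evident event store

def record_share(identifier, recipient, purpose):
    """Called wherever an application shares personal data externally."""
    sharing_log.append({
        "subject": hashed(identifier),
        "recipient": recipient,
        "purpose": purpose,
        "at": int(time.time()),
    })

def sharing_history(identifier):
    """Rebuild one customer's sharing history from the hashed key."""
    key = hashed(identifier)
    return [e for e in sharing_log if e["subject"] == key]

record_share("ann@example.com", "AdPartnerCo", "retargeting")
record_share("bob@example.com", "AnalyticsCo", "measurement")
print(len(sharing_history("Ann@Example.com")))  # 1
```

The weak point is the one the review notes: every sharing workflow has to remember to call `record_share`, so an undocumented flow simply never appears in the history.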

Perhaps the most unusual feature of Osano is the company’s database of privacy policies and related information for more than 11,000 companies. Osano gathers this data from public Web sites and has privacy attorneys review the contents and score each company on 163 data points. This lets Osano rate firms based on the quality of their privacy processes. Its Web spiders continuously check for changes, and it adjusts privacy ratings when appropriate. Osano also keeps watch on other information, such as data breach reports and lawsuits, which might also affect ratings. This lets Osano alert its clients if they are sharing data with a risky partner.

Osano is offered in a variety of configurations, ranging from free (cookie blocking only) to $199/month (cookie blocking and consent management for up to 50,000 monthly unique Web site visitors) to enterprise (all features, negotiated prices). The company was started in 2018 and says its free version is installed on more than 750,000 Web sites.

Sunday, September 06, 2020

When CDPs Fail: Insights from the CDP Institute Survey

We released a new member survey last week at the CDP Institute. You can (and should) download the full report, so I won’t go through all the details. You can also view a discussion of this on Scott Brinker's Chief Martech Show.  But here are three major findings. 

Martech Best Practices Matter 

We identified the top 20% of respondents as leaders, based on outcomes including overall martech satisfaction, customer data unification, advanced privacy practices, and CDP deployment. We then compared the martech practices of leaders vs. others. This is a slightly different approach from our previous surveys, but the result was the same: the most successful companies deploy structured management methods, put a dedicated team within marketing in charge of martech, and select their systems based on features and integration, not cost or familiarity. No surprise, but still good to reaffirm.




Martech Architectures are More Unified 

For years, our own and other surveys showed a frustratingly static 15%-20% of companies reporting access to unified customer data. This report finally showed a substantial increase, to 26% or 52% depending on whether you think feeding data into a marketing automation or CRM system qualifies as true unification. (Lots of data in the survey suggests not, incidentally.)


 

CDPs Are Making Good Progress 

The survey showed a sharp growth in CDP deployment, up from 19% in 2017 to 29% in 2020. Bear in mind that we’re surveying members of the CDP Institute, so this is not a representative industry sample. But it’s progress nevertheless. 


Where things got really interesting was a closer look at the relationship of customer data architectures to CDP status. You might think that pretty much everyone with a deployed CDP would have a unified customer database – after all, that’s the basic definition of a CDP and the numbers from the two questions are very close. But it turns out that just 43% of the respondents who said they had a deployed CDP also said they had a unified database (15% with the database alone and 28% with a database and shared orchestration engine). What’s going on here? 


 

The obvious answer is that people don’t understand what a CDP really is. Certainly we’ve heard that complaint many times. But these are CDP Institute members – a group that we know are generally smarter and better looking and, more to the point, should understand CDP accurately even if no one else does. Sure enough, when we look at the capabilities that people with a deployed CDP say they expect from a CDP, the rankings are virtually identical whether or not they report they have a unified database. 

(Do you like this chart format? It’s designed to highlight the differences in answers between the two groups while still showing the relative popularity of each item. It took many hours to get it to this stage. To clarify, the first number on each bar shows the percentage for the group that selected the answer less often and the second number shows the group that selected it more often. So, on the first bar above, 73% of people with a unified customer database said they felt a CDP should collect data from all sources and 76% of those without a unified database said the same. The color of the values and of the bar tips shows which group chose the item more often: green means it was more common among people with a unified database and red means it was more common among people without one. Apologies if you’re colorblind.)

Answers regarding CDP benefits were also pretty similar, although an interesting divergence begins to appear: respondents without a unified database were more likely to cite advanced applications including orchestration, message selection, and predictive models. Some CDPs offer those and some don’t, and it’s fair to think that people who prioritized them might consider their CDP deployment proper even if they haven’t unified all their data.


But the differences in the benefits are still pretty minor. Where things really get interesting is when we look at obstacles to customer data use (not to CDP in particular). Here, there’s a huge divergence: people without a unified database were almost twice as likely to cite challenges assembling unified data and using that data. 


Combining this with previous answers, I read the results this way: people who say they have a deployed CDP but not a unified database know quite well that a CDP is supposed to create a unified database. They just haven’t been able to make that happen. 

This of course raises the question of Why? We see from the obstacle chart that the people without unified data are substantially more likely to cite IT resources as an issue, with smaller differences in senior management support and data extraction. It’s intriguing that they are actually less likely to cite organizational issues, marketing staff time, or budget. 

Going back to our martech practices, we also see that those without a unified database are more likely to employ “worst practices” of using outside consultants to compensate for internal weaknesses and letting each group within marketing select its own technology. They’re less likely to have a Center of Excellence, use agile techniques, or follow a long-term martech selection plan. (If the sequencing of this chart looks a bit odd, it’s because the items are arranged in order of total frequency, including respondents without a deployed CDP. The fact that items at the bottom of the chart have relatively high values shows that deployed CDP owners selected them substantially more often than people without a CDP.)

 

So, whatever the problems with their IT staff, it seems at least some of their problems reflect martech management weaknesses as well. 

But There's More...

The survey report includes two other analyses that touch on this same theme of management maturity as a driver of success. The first focuses on cross-channel orchestration as a marker of CDP understanding.  It turns out that the closer people get to actually deploying a CDP, the less they see orchestration as a benefit. My interpretation is that orchestration is an appealing goal but, as people learn more about CDP, they realize a CDP alone can't deliver it.  They then give higher priority to less demanding benefits.   (To be clear: some CDPs do orchestration but there are other technical and organizational issues that must also be resolved.)  


We see a similar evolution in understanding of obstacles to customer data use. These also change across the CDP journey: organizational issues including management support, budget, and cooperation are most prominent at the start of the process. Once companies start deployment, technical challenges rise to the top.  Finally, after the CDP is deployed, the biggest problem is lack of marketing staff resources to take advantage of it. You may not be able to avoid this pattern, but it’s good to know what to expect. 


The other analysis looks at CDP results. In the current survey, 83% of respondents with a deployed CDP said it was delivering significant value while 17% said it was not. The dissatisfied share has been stable: it was 16% in our 2017 survey and 18% in 2019.

I compared the satisfied vs dissatisfied CDP owners and found they generally agreed on capabilities and benefits, with orchestration again popping out as an exception: 65% of dissatisfied CDP owners cited it as a CDP benefit compared with just 45% of the satisfied owners. By contrast, satisfied owners were more likely to cite the less demanding goals of improved segmentation, predictive modeling, and data management efficiency. Similarly, the satisfied CDP users were less likely to cite coordinated customer treatments as a CDP capability and more likely to cite data collection. (Data collection still topped the list for both groups, at 77% for the satisfied owners and 65% for the others.) 

When it came to obstacles, the dissatisfied owners were much more likely to cite IT and marketing staff limits and organizational cooperation. The divergence was even greater on measures of martech management, including selection, responsibility, and techniques. 


In short, the dissatisfied CDP owners were much less mature martech managers than their satisfied counterparts. As CDP adoption moves into the mainstream, it becomes even more important for managers to recognize that their success depends on more than the CDP technology itself. 

There’s more in the report, including information on privacy compliance, and breakouts by region, company size, and company type. Again, you can download it here for free.