Sunday, December 13, 2020

MarTech Plot Lines for 2021


“Apophenia” – seeing patterns where none exist – is both occupational hazard and job requirement for an industry analyst. The CDP Institute Daily Newsletter provides a steady supply of grist for my pattern detection mill. But the selection of items for that newsletter isn’t random. I have a list of long-running stories that I follow, and keep an eye out for items that illuminate them. I’ll share some of those below.

Feel free to play along at home and let me know what stories you see developing. Deep State conspiracy theories are out of bounds but you’re welcome to speculate on the actual author(s) of the works attributed to “Scott Brinker”. 

Media

Everyone knows the pandemic accelerated the shift towards online media that was already under way. A few points that haven’t been made quite so often include:

- connected TVs and other devices allow individual-level targeting without use of third-party cookies. As online advertising is increasingly delivered through those channels,  the death of cookies becomes less important. Nearly all device-level targeting can also include location data, adding a dimension that cookies often lack.

- walled gardens (Facebook, Google, Amazon) face increasing competition from walled flower pots – that is, businesses with less data but a similar approach. Retailers like Walmart, Kroger, Target, and CVS have all started their own ad networks, drawing on their own customer data. Traditional publishers like Meredith have collected their formerly-scattered customer data to enable cross-channel, individual-level targeting.  Compilers like Neustar and Merkle are also entering the business. None of these has the data depth or scale of Facebook, Google, or Amazon but their audiences are big enough to be interesting. The various “universal ID” efforts being pursued by the ad industry will enable the different flower pots to cross-pollinate, creating larger audiences that I’ll call walled flower beds unless someone stops me.

- shoppable video is growing rapidly. Amazon seems unstoppable but it faces increasing competition from social networks, streaming TV, and every other digital channel that can let viewers make purchases related to what they’re watching. The numbers are still relatively small but the potential is huge. And note that this is a way to sell based purely on context, so targeting doesn’t have to be based on individual identities. That will become more important as privacy regulations become more effective at shutting off the flow of third-party personal data.

- digital out-of-home ads will combine with augmented and virtual reality to create a fundamentally new medium. The growth of digital out-of-home advertising is worth watching just because DOOH is such a great acronym. But it’s also a huge story that doesn’t currently get much attention and will explode once people can travel more freely post-pandemic. Augmented and virtual reality are making great technical strides (how about an AR contact lens?) but so far seem like very niche marketing tools. However, the two technologies perfectly complement each other, and will be supercharged by more accessible location data. Watch this space.

Marketing Technology

- data will become more accessible. That marketers want to be “data-driven” is old news. What’s changing is that years of struggle are finally yielding progress toward making data more available and providing the tools to use it. As with digital advertising, the pandemic has accelerated an existing trend, achieving in months digital transformations that would otherwise have taken years.  Although internal data is the focus of most integration efforts, access to external data is also growing, privacy rules notwithstanding. Intent data has been a particular focus with recent announcements from TechTarget, ZoomInfo, Spiceworks Ziff Davis, and Zeta Global.

- artificial intelligence will become (even more) ubiquitous. It seems just yesterday that we were impressed to hear that a company’s product was “AI-powered”. Today, that’s as exciting as being told their offices have “electric lights”. But AI continues to grow stronger even if it doesn’t get as much attention (which the truly paranoid will suspect is because the AIs prefer it that way). Marketers increasingly worry that AI will ultimately replace them, even if it makes them more productive before that happens. The headline story is that AI is taking on more “creative” tasks such as content creation and campaign design, which were once thought beyond its capabilities. But the real reason for its growth may be that interactions are shifting to digital channels where success will be based more on relentless analytics than an occasional flash of uniquely human insight.

- blockchain will quiet down. I’ll list blockchain only to point out that it’s been an underachiever in the hype-generation department. Back in 2018 we saw it at least as often as AI. Now it comes up only rarely.  There are many clear applications in logistics and some promising proposals related to privacy. But there’s less wild-eyed talk about blockchain changing the world. Do keep an ear open, though: I suspect more is happening behind the scenes than we know.

- no-code will continue to grow. If anything has replaced AI as the buzzword of the year, it’s “no code” and related concepts like “self-service” and “citizen [whatever]”. It’s easy to make fun of these (“citizen brain surgeon”, anyone?) but there’s no doubt that many workers become more productive when they can automate processes without relying on IT professionals. The downside is the same loss of quality control and integration posed by other types of shadow IT – although no-code systems are more often governed than true shadow IT projects.  In addition, no-code’s more sophisticated cousin, low-code, is widely used by IT professionals.  It’s possible to see no-code systems as an alternative to AI: both improve productivity, one by letting workers do more and the other by replacing them altogether. But a more realistic view is to recognize AI as a key enabling technology inside many no-code systems. As the internal AIs get smarter, no-code will take on increasingly complex tasks, making it more helpful (and more threatening) to increasingly skilled workers.

Marketing

The pandemic has changed how marketers (and everyone else) do their work. With vaccines now reaching the public, it’s important to realize that conditions will change again fairly soon. But that doesn’t mean things will go back to how they were.

- events have changed forever. Yes, in-person events will return and many of us will welcome them with new appreciation for what we’ve missed. But tremendous innovation has occurred in on-line events and more will surely appear in coming months. It’s obvious that there will be a permanent shift towards more digital events, with in-person events reserved for situations where they offer a unique advantage. We can also expect in-person events to incorporate innovations developed for digital events – such as enhanced networking techniques and interactive presentations. I don’t think the significance of this has been fully recognized.  Bear in mind that live events are often the most important new business source for B2B marketers, so major changes in how they work will ramify throughout the marketing and sales process.

- remote work is here to stay. Like events, marketers’ worksites will drift away from the current nearly-all-digital mode to a mix of online and office-based activities. Also like events, innovations developed for remote work, such as improved collaboration tools, will be deployed in both situations. The key difference is that attendance of most events is optional, so attendees can walk away from dysfunctional changes. Workers have less choice about their environments, so harmful innovations such as employee surveillance and off-hours interruptions are harder for them to reject. Whether these stressors outweigh the benefits of remote work will depend on how well companies manage them, so we can expect a period of experimentation and turmoil as businesses learn what works best. With luck, this will mean new attention to workplace policies and management practices, something many firms have handled poorly in the past. Companies that excel at managing remote workers will have a new competitive advantage, especially since remote work lets the best workers choose from a wider variety of employers.

- privacy pressures will rise. The European Union’s General Data Protection Regulation (GDPR) wasn’t the first serious privacy rule or the only reason that privacy gained more attention. But its enforcement date of May 25, 2018 does mark the start of an escalating set of changes that impact what data is available to marketers and how consumers view use of their personal information. These changes will continue and companies will find it increasingly important to manage consumer data in ways that comply with ever-more-demanding regulations and give consumers confidence that their data is being handled appropriately. (A closely related subplot is continued security breaches as companies fail to secure their data despite best efforts.  Another is the continued misbehavior of Facebook and other social media firms and increasing resistance by regulators and consumers.  That one is worth a channel of its own.)  Marketers will need to take a more active role in privacy discussions, which have been dominated by legal, security, and IT staffs in businesses, and by consumer advocates, academics, and regulators in the political world. Earning a seat at that crowded table won’t be easy but making their voice heard is essential if marketers want the rules to reflect their needs.

- trust is under fire. This is a broad trend spanning continents and stretching back for years (see Martin Gurri’s uncannily prescient The Revolt of the Public, published in 2014). Socially, the trend presents itself as a loss of trust in institutions, the benefits of technology, and credentialed experts in general. In marketing, it shows up as companies voicing disappointment with data-driven analytics and personalization, as consumers not trusting companies to manage or protect their data, as workers' fear that AI systems will harm creativity and codify unfair bias, as widely-noted gaps between what customers want and what companies deliver, as “citizen developers” preferring to build their own systems, and as buyers preferring peers, Web searches, social media, and pretty much any other information source to analyst reports.

Trust is the theme that connects all the stories I’ve listed above.  Without trust, consumers won’t share their data, respond to marketing messages, or try new channels; governments will push for more stringent privacy and business regulations; workers will be less productive; and all industry progress will move more slowly. The trust crisis is too broad for marketers to fix by themselves. But they need to account for it in everything they do, adjusting their plans to include trust-building measures that might not have been needed in a healthier past.  The pandemic will end soon and technologies come and go.  But trust will be a story to follow for a long, long time.

Wednesday, November 11, 2020

Trust-Hub Maps Company Data for Privacy and Other Uses

Marketers care about privacy management primarily as it relates to customer data, but privacy management overlaps with a broader category of governance, risk, and compliance (GRC) systems that cover many data types. Like privacy systems (and Customer Data Platforms), GRC systems require an inventory of existing customer data, including systems, data elements within each system, and uses for each element.   These inventories form the foundation for functions including risk assessments, security, process documentation, responses to consumer data requests, and compliance monitoring.

Having a single inventory would be ideal. But each application needs the inventory to be presented in its own way. One reason so many different systems gather their own data inventories is that each is limited to its own type of presentation.

Trust-Hub Privacy Lens avoids this problem by creating a comprehensive data inventory and then enabling users to create whatever views they need. This requires gathering not just a list of data elements, but also documenting the users, systems, geographic locations, and business processes associated with each element. These attributes can then be filtered to create views tailored to a particular purpose. The system builds on this foundation by creating applications for related tasks such as risk analysis, privacy impact assessments, and security risk analysis. Users access the system through customizable dashboards that can highlight their particular concerns.

Privacy Lens offers a range of methods to collect its inventory. It can import existing information, such as spreadsheets prepared for compliance reporting or security audits. It can read metadata from common systems including Salesforce and BigID or import metadata gathered by specialized discovery tools. When the data is not already assembled, Trust-Hub can scan existing data sources to create its own maps of data elements or let users enter information manually. In addition to data elements, the system can track business processes, user roles, individual users, resources, locations, external organizations, legal information, and evidence related to particular incidents. This information is all mapped against a master data model, helping users track what they’ve assembled and what’s still missing. The data is held in a graph database, Neo4j, a technology that is particularly good at tracking relationships among different elements.
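
To make the graph idea concrete, here’s a minimal sketch of how such a data map might look in Neo4j, written with the official Python driver. The node labels, property names, and queries are my own illustration, not Trust-Hub’s actual schema:

```python
# Hypothetical data map in Neo4j: elements, systems, and processes as
# nodes, with relationships recording where each element lives and
# which processes use it. Illustrative only -- not Trust-Hub's schema.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

with driver.session() as session:
    session.run(
        "MERGE (e:DataElement {name: $el}) "
        "MERGE (s:System {name: $sys}) "
        "MERGE (p:Process {name: $proc}) "
        "MERGE (e)-[:STORED_IN]->(s) "
        "MERGE (e)-[:USED_BY]->(p)",
        el="email_address", sys="Salesforce", proc="newsletter_sends",
    )
    # The payoff: one traversal answers "which processes touch this element?"
    result = session.run(
        "MATCH (:DataElement {name: $el})-[:USED_BY]->(p:Process) "
        "RETURN p.name AS process",
        el="email_address",
    )
    print([record["process"] for record in result])

driver.close()
```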

Although some Privacy Lens users will focus on loading data into the system, most will be interested in using that data for specific purposes. Privacy Lens supports these with applications. Privacy managers, for example, can see an over-all privacy risk score, a list of open risks, a matrix that helps to prioritize risks by plotting them against frequency and impact, detailed reports on each risk, and additional risk scores for specific data types and processes. These risk scores are based on ten factors such as confidentiality, accuracy, volume, and regulation. The scores enable users to assess not just the risk of violating a privacy regulation, but also the risk of a security breach and the potential cost of such a breach. Trust-Hub argues that companies tend to focus on compliance risk even though the costs of litigation and reputation loss from a breach are vastly higher than any regulatory fines.
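
As a rough illustration of how factor-based scoring works (the weights and the 0-10 scale here are my assumptions, not Trust-Hub’s actual formula):

```python
# Hedged sketch of a weighted-factor risk score. The four factor names
# come from the article; the weights and the 0-10 scale are assumed.
factors = {"confidentiality": 9, "accuracy": 4, "volume": 7, "regulation": 8}
weights = {"confidentiality": 0.4, "accuracy": 0.1, "volume": 0.2, "regulation": 0.3}

risk = sum(factors[f] * weights[f] for f in factors)
print(f"privacy risk score: {risk:.1f} / 10")  # 7.8 / 10
```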

Privacy officers can also use the system to conduct formal assessments, such as Privacy Impact Assessments, by answering questions in a system-provided template. The system keeps a copy of each assessment report along with a snapshot of the data model when the report is created, making it easy to identify subsequent changes and how they might affect the assessment. Compliance and security officers can conduct other assessments within the system, such as tracing risks created when data is shared with external business partners.

Risks uncovered during an assessment can be assigned a mitigation plan, with tasks assigned to individual users and reports tracking progress towards completion. Data in the model can also create other reports, such as Record of Processing Activity (ROPA), consent dates, and legal justifications. Personal data usage reports can take multiple perspectives, including which systems and processes use a particular data element, which elements are used by a particular system or process, and where a particular individual’s data is held.

Trust-Hub has two additional products that exploit the Privacy Lens data map. Privacy Hub loads actual customer data from mapped systems, where it can be used to respond to data requests by consumers (Data Subject Access Requests, or DSARs) or answer questions from business partners without revealing personal information (for example, to verify that a particular individual is over 18). Privacy Engine loads masked versions of personal data and makes it available for analysis, so that users can run reports and create lists without being given access to private data.
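
The “answer without revealing” pattern is worth a quick sketch: the API returns only a boolean, never the underlying data. Function and field names here are hypothetical, not Trust-Hub’s actual API:

```python
# Minimal sketch: answer a partner's question without exposing PII.
# Names are hypothetical; the birthdate never leaves the system.
from datetime import date

_profiles = {"cust-42": {"birthdate": date(2001, 5, 17)}}  # held internally

def is_over_18(customer_id: str, today: date = date(2020, 10, 11)) -> bool:
    b = _profiles[customer_id]["birthdate"]
    age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
    return age >= 18  # the caller learns only True/False

print(is_over_18("cust-42"))  # True: 19 years old on the reference date
```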

Trust-Hub was founded in 2016 and released its first product in 2018. The company now has more than one hundred clients, primarily large organizations selling directly to consumers, and service providers to those companies, such as consultants, system integrators, and law firms. Pricing is based on the number of users and starts around $25,000 per year.

Sunday, October 11, 2020

Twilio Buys CDP Segment for $3.2 Billion

Friday afternoon brought an unconfirmed Forbes report that communications platform Twilio is buying CDP Segment for $3.2 billion. (The all-stock deal was officially announced on Monday.)  It's Twilio’s third acquisition this year, following much smaller deals in January for telephony platform Teravoz and in July for IoT connector Electric Imp.  It comes two years after Twilio’s $3 billion purchase of email platform SendGrid.

The deal is intriguing from at least three perspectives:

Valuation: the $3.2 billion price is impressive by any standard. Segment’s current revenue isn’t known, although one published estimate put it at $180 million for 2019.  That sounds a bit high for a company with 450 employees at the time, but let's go with it and assume $200 million for 2020 revenue.  This has Twilio paying 16x revenue, which is less than the 20x that Salesforce paid for Mulesoft ($6.5 billion on roughly $300 million) but in line with the 15x that Adobe paid for Marketo ($4.7 billion on $320 million)  or 14x that Twilio itself paid for SendGrid ($2 billion on $140 million when the deal was announced; the $3 billion price reflects the subsequent rise in Twilio’s stock). Note that these prices are well above the run-of-the-mill SaaS valuations, which are below 10x revenue.
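
For those who want to check the arithmetic, the multiples work out roughly like this (Segment’s revenue is an estimate, so treat the first line as approximate):

```python
# Revenue multiples for the deals cited above: (price, est. revenue).
deals = {
    "Twilio/Segment":      (3.2e9, 200e6),  # 2020 revenue is an assumption
    "Salesforce/Mulesoft": (6.5e9, 300e6),  # revenue is "roughly" $300M
    "Adobe/Marketo":       (4.7e9, 320e6),
    "Twilio/SendGrid":     (2.0e9, 140e6),  # price when announced
}
for name, (price, revenue) in deals.items():
    print(f"{name}: about {price / revenue:.0f}x revenue")
```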

Twilio: the SendGrid acquisition marked a major movement of Twilio beyond its base in telephone messaging to support a broader range of channels. If they’re to avoid the fragmentation that has plagued the larger marketing clouds, which also grew by acquisition, they need a CDP to unify their customer data. The big clouds (Oracle, Adobe, Salesforce, Microsoft, SAP) all chose to build their CDPs internally, but Twilio is much smaller and lacks the resources to do the same in a timely fashion. (Even the big clouds struggled, of course). On the other hand, Twilio’s surging stock price makes acquisition much easier. So buying a CDP they can deploy immediately gains them time and a mature product. It also offers entry to 20,000 accounts that might buy other Twilio products, especially given Segment’s position at the heart of their customer data infrastructure.

Of course, if Twilio really wants to compete with the marketing clouds, it will need to support other channels, most notably Web site management and ecommerce. Note that vendors beyond the clouds are pursuing the same strategy, including Acquia (which bought CDP AgilOne), IBM-spinoff Acoustic, MailChimp, and HubSpot. So the strategy isn’t unique, but it may be the only way for companies like Twilio to avoid being marginalized as apps that depend on major platforms controlled by other vendors. By definition, apps are easily replaced and are therefore easily commoditized. That’s a position to escape if you have the resources to expand beyond it.

CDP Industry: Segment is/was the largest independent CDP vendor, although Tealium and Treasure Data are close. Other recent CDP acquisitions were mostly mid-tier vendors (AgilOne, Evergage, QuickPivot, Lattice Engines, SessionM). Of these deals, only AgilOne seemed central to the product strategy of the buyers. Segment’s decision to sell rather than try to grow on its own may signal a recognition that it will be increasingly difficult to survive as a general-purpose independent CDP. We’ve already seen much of the industry shift to more defensible niches, including integrated marketing applications and vertical industry specialization. There’s certainly still a case to be made for an independent CDP as a way to avoid lock-in by broad marketing clouds. But there’s no doubt that the marketing cloud vendors’ own CDPs will grab some chunk of the market, and more will be lost to CDPs embedded in other systems (email, ecommerce, reservations, etc.), offered by service vendors (Mastercard, Vericast, TransUnion, etc.) and home-built on cloud platforms like Amazon Web Services and Google Cloud.

Given these pressures, we’re likely to see additional purchases of CDPs by companies who are trying to build their own complete marketing platforms, including Shopify, MailChimp, HubSpot, and a number of private-equity backed roll-ups. Faced with a daunting competitive situation, many CDP vendors will be interested in selling, even at prices that might not be as high as they once hoped.

Ironically, none of this bodes ill for the fundamental concept of the CDP itself. Companies will still need a central system to assemble and share unified customer profiles. It is indeed the platform on which the other platforms are built. Whether their CDP is stand-alone software or part of a larger solution doesn’t really matter from the user’s perspective: what matters is that clean, consistent, complete customer data is easily available to any system that needs it. Similarly, companies will still need the skills to build and manage CDPs.  Marketing, data, and IT departments will wrestle with customer data long into the future, and the winners will be best positioned to achieve business success. 


Friday, September 25, 2020

Software Review: Skypoint Cloud Combines CDP and Privacy Management

There are obvious similarities between Customer Data Platforms and privacy systems: both find customer data in all company systems; both assemble that data into unified profiles; and both govern access to those profiles. Indeed, some CDP vendors have expanded into privacy management by adding consent modules to their systems or by integrating third-party consent managers.

Still, the line between CDP and privacy managers is usually clear: CDPs store customer data imported from other systems while privacy managers read the data in place. There might be a small gray area where the privacy system imports a little information to do identity matching or to build a map of what each source system contains. But it’s pretty easy to distinguish systems that build huge, detailed customer data sets from those that don’t. 

There’s an exception for every rule. Skypoint Cloud is a CDP that positions itself as a privacy system, including data mapping, consent management, and DSR (Data Subject Request) fulfillment. What makes it a CDP is that Skypoint ingests all customer data and builds its own profiles. Storing the data within the system actually makes fulfilling the privacy requirements easier, since Skypoint can provide customers with copies of their data by reading its own files and can ensure that data extracts contain only permitted information. Combining CDP and privacy in a single system also saves the duplicate effort of having two systems each map and read customer data in source systems.

The conceptual advantages of having one system for both CDP and privacy are obvious. But whether you’d want to use a combined system depends on how good it is at the functions themselves. This is really just an example of the general “suite vs best-of-breed” debate that applies across all system types.

You won’t be surprised that a young, small vendor like Skypoint lacks many refinements of more mature CDP systems. Most obviously, its scope is limited to ingesting data and assembling customer profiles, with just basic segmentation capabilities and no advanced analytics or personalization.  That’s only a problem if you want your CDP to include those features; many companies would rather use other tools for them anyway. There’s that “suite vs best-of-breed” choice again.

When it comes to assembling the unified database, Skypoint has a bit of a secret weapon: it relies heavily on Microsoft Azure Data Lake and Microsoft’s Common Data Model. Azure lets it scale effortlessly, avoiding one set of problems that often limit new products. Common Data Model lets Skypoint tap into an existing ecosystem of data connectors and applications, again saving Skypoint from developing those from scratch. Skypoint says they’re the only CDP vendor other than Microsoft itself to use the Common Data Model: so far as I know, that’s correct. (Microsoft, Adobe, SAP, and others are working on the Open Data Initiative that will map to the Common Data Model but we haven’t heard much about that recently.) 

How it works is this: Skypoint can pull in any raw data, using its own Web tag or other sources, and store it in the data lake. Users set up a data flow to ingest each source, using either the existing or custom-built connectors. The 200+ existing connectors cover most of the usual suspects, including Web analytics, ecommerce, CRM, marketing automation, personalization, chat, Data Management Platforms, email, mobile apps, data stores, and the big cloud platforms.

Each data flow maps the source data into data entities and relations, as defined in the Common Data Model or adjusted by the user. This is usually done before the data is loaded into the data lake but can also be done later to extract additional information from the raw input.  Skypoint applies machine learning to identify likely PII within source data and then lets users flag PII entities in the data map.  Users can also define SQL queries to create calculated values.

Each flow has a privacy tab that lets the user specify which entities are returned by Data Subject Requests, whether data subjects can order the data erased, and which data processes use each entity. The data processes, which are defined separately, can include multiple entities with details about which entities are included and what consents are required. Users can set up different data processes for customers who are subject to different privacy regulations due to location or other reasons.

Once the data is available to the system, Skypoint can link records related to the same person using either rule-based (deterministic) matches or machine learning. It’s up to the client to define their own matching rules. The system maintains its own persistent ID for each individual. Matches can be either incremental – only matching new inputs to existing IDs – or can rebuild the entire matching universe from scratch. Skypoint also supports real-time identity resolution through API calls from a Web tag.
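
To make the deterministic side concrete, here’s a minimal sketch of rule-based matching with a union-find structure assigning persistent IDs. The rules and fields are invented for illustration; Skypoint’s actual implementation isn’t public:

```python
# Minimal sketch of deterministic identity resolution: records sharing
# a match key are linked under one persistent ID via union-find.
# Rules and fields are invented for illustration.
from collections import defaultdict

records = [
    {"id": 1, "email": "ann@x.com", "phone": "555-0100", "last": "Lee"},
    {"id": 2, "email": "ann@x.com", "phone": None,       "last": "Lee"},
    {"id": 3, "email": None,        "phone": "555-0100", "last": "Lee"},
]

parent = {r["id"]: r["id"] for r in records}

def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path halving
        a = parent[a]
    return a

# Each rule maps a record to a key; identical non-null keys mean "same person".
rules = [
    lambda r: r["email"],
    lambda r: (r["phone"], r["last"]) if r["phone"] else None,
]

for rule in rules:
    by_key = defaultdict(list)
    for rec in records:
        if (key := rule(rec)) is not None:
            by_key[key].append(rec["id"])
    for ids in by_key.values():
        for other in ids[1:]:
            parent[find(other)] = find(ids[0])  # union the clusters

clusters = defaultdict(list)
for rec in records:
    clusters[find(rec["id"])].append(rec["id"])
print(dict(clusters))  # {1: [1, 2, 3]} -- one persistent ID for all three
```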

After the matching is complete, the system merges its data into unified customer profiles. Skypoint provides a basic audience builder that lets users define selection conditions. This also leverages Skypoint's privacy features by first having users define the purpose of the audience and then making available only data entities that are permitted for that purpose. Users can also apply consent flags as variables within selection rules. Audiences can be connected with actions, which export data to other systems manually or through connectors.
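
A toy example of what a purpose-aware selection might look like (the field names and the purpose-to-fields policy map are my assumptions, not Skypoint’s):

```python
# Sketch: the declared purpose limits which fields the audience can use,
# and consent flags act as ordinary filter conditions. Illustrative only.
profiles = [
    {"id": 1, "email": "a@x.com", "consent_marketing": True,  "ltv": 900},
    {"id": 2, "email": "b@x.com", "consent_marketing": False, "ltv": 1200},
]

purpose = "marketing"
permitted = {"marketing": {"email", "ltv"}}  # assumed purpose-to-fields policy

audience = [
    {field: p[field] for field in permitted[purpose]}
    for p in profiles
    if p["consent_marketing"] and p["ltv"] > 500  # selection conditions
]
print(audience)  # only customer 1: consented and above the LTV threshold
```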

Users can supplement the audience builder by creating their own apps with Microsoft Azure tools or let external systems access the data directly by connecting through the Common Data Model.

Back to privacy. Skypoint creates an online Privacy Center that lets customers consent to different uses of their data, make data access requests, and review company policy statements. It creates an internal queue of access requests and tracks their progress towards fulfillment. Users can specify information to be used in the privacy center, such as the privacy contact email and URLs of the policy statements. They can also create personalized email templates for privacy-related messages such as responses to access requests or requests to verify a requestor’s email address.

This is a nicely organized set of features that includes what most companies will need to meet privacy regulations. But the real value here is the integration with data management: gathering data for subject access requests is largely automated when data is mapped into the system through the data flows, a major improvement over the manual data assembly required by most privacy solutions. Similarly, the connection between data flows, audiences, and data processing definitions makes it easier to ensure the company uses only properly consented information. There are certainly gaps – in particular, data processes must be manually defined by users, so an undocumented process would be missed by the system. But that’s a fairly common approach among privacy products.

Pricing for Skypoint starts with a free version limited mostly to the privacy center, consent manager, and data access requests. Published pricing rises past $2,000 per month for more than ten data integrations. The company was founded in 2019 and is just selling to its first clients.

Sunday, September 13, 2020

Software Review: Osano Manages Cookie Consent and Access Requests

The next stop on our privacy software tour is Osano, which bills itself as “the only privacy platform you’ll ever need”.  That's a bit of an overstatement: Osano is largely limited to data subject interactions, which is only one of the four primary privacy system functions I defined in my first post on this topic. (The other three are: discovering personal data in company systems, defining policies for data use, and enforcing those policies.) But Osano handles the interactions quite well and adds several other functions that are unique. So it’s certainly worth knowing.

The two main types of data subject interactions are consent management and data subject access requests (DSARs). Osano offers structured, forms-based solutions to both of these, available in a Software-as-a-Service (SaaS) model that lets users deploy them on Web sites with a single line of JavaScript or on Android and iOS mobile apps with an SDK.

The consent management solution provides a prebuilt interface that automatically adapts its dialog to local laws, using geolocation to determine the site visitor's location.  There are versions for 40+ countries and 30+ languages, which Osano updates as local laws change. Because it is delivered as a SaaS platform, the changes made by Osano are automatically applied to its clients. This is a major time-saver for organizations that would otherwise need their own resources to monitor local laws and update their system to conform to changes.

Details will vary, but Osano generally lets Web visitors consent to or reject different cookie uses including essential, analytics, marketing, and personalization. Where required by laws like the California Consumer Privacy Act (CCPA), it will also collect permission for data sharing. Osano stores these consents in a blockchain, which prevents anyone from tampering with them and provides legally-acceptable proof that consent was obtained. Osano retains only a hashed version of the visitor’s personal identifiers, thus avoiding the risk of a PII leak while still enabling users to search for consent on a known individual.
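
The hashed-identifier approach deserves a sketch, since it’s what lets consent be proven without holding PII. The salt and record layout here are my assumptions, not Osano’s actual scheme:

```python
# Sketch of a consent record keyed by a salted hash of the identifier:
# no PII is stored, but a known email can still be looked up by
# recomputing its hash. Salt and layout are assumptions, not Osano's.
import hashlib
import time

SALT = b"per-tenant-secret"

def subject_key(email: str) -> str:
    return hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()

def consent_record(email: str, purposes: dict) -> dict:
    return {
        "subject": subject_key(email),  # hash only, never the raw email
        "purposes": purposes,           # e.g. {"analytics": True}
        "ts": int(time.time()),
    }

rec = consent_record("Ann@Example.com", {"analytics": True, "marketing": False})
# Lookup for a known individual: hash the same way and compare.
print(rec["subject"] == subject_key("ann@example.com"))  # True
```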

Osano’s use of blockchain to store consent records is unusual. Also unusual: Osano will search its client’s Web site to check for first- and third-party cookies and scripts. The system will tentatively categorize these, let users confirm or change the classifications, and then let site visitors decide which cookies and scripts to allow or block. There’s an option to show visitors details about each cookie or script.

Osano also provides customer-facing forms to accept Data Subject Access Requests. The system backs these with an inventory of customer data, built by users who manually define systems, data elements, and system owners. Put another way: there’s no automated data discovery. The DSAR form collects the user’s information and then sends an authentication email to confirm they are who they claim.  Once the request is accepted, Osano sends notices to the owners of the related systems, specifying the data elements included and the action requested (review, change, delete, redact), and tracks the owners’ reports on completion of the required action. Osano doesn’t collect the data itself or make any changes in the source systems.

The one place where Osano does connect directly with source systems is through an API that tracks sharing of personal data with outside entities. This requires system users to embed an API call within each application or workflow that shares such data: again, there’s no automated discovery of such flows. Osano receives notification of data sharing as it happens, encrypts the personal identifiers, and stores it in a blockchain along with event details. Users can search the blockchain for the encrypted identifiers to build a history of when each customer’s data was shared.

Perhaps the most unusual feature of Osano is the company’s database of privacy policies and related information for more than 11,000 companies. Osano gathers this data from public Web sites and has privacy attorneys review the contents and score each company on 163 data points.  This lets Osano rate firms based on the quality of their privacy processes. It runs Web spiders that continuously check for changes and will adjust privacy ratings when appropriate. Osano also keeps watch on other information, such as data breach reports and lawsuits, which might also affect ratings. This lets Osano alert its clients if they are sharing data with a risky partner.

Osano is offered in a variety of configurations, ranging from free (cookie blocking only) to $199/month (cookie blocking and consent management for up to 50,000 monthly unique Web site visitors) to enterprise (all features, negotiated prices). The company was started in 2018 and says its free version is installed on more than 750,000 Web sites.

Sunday, September 06, 2020

When CDPs Fail: Insights from the CDP Institute Survey

We released a new member survey last week at the CDP Institute. You can (and should) download the full report, so I won’t go through all the details. You can also view a discussion of this on Scott Brinker's Chief Martech Show.  But here are three major findings. 

Martech Best Practices Matter 

We identified the top 20% of respondents as leaders, based on outcomes including over-all martech satisfaction, customer data unification, advanced privacy practices, and CDP deployment. We then compared martech practices of leaders vs. others. This is a slightly different approach from our previous surveys but the result was the same: the most successful companies deploy structured management methods, put a dedicated martech team within marketing, and select their systems based on features and integration, not cost or familiarity. No surprise but still good to reaffirm.

Martech Architectures are More Unified 

For years, our own and other surveys showed a frustratingly static 15%-20% of companies reporting access to unified customer data. This report finally showed a substantial increase, to 26% or 52% depending on whether you think feeding data into a marketing automation or CRM system qualifies as true unification. (Lots of data in the survey suggests not, incidentally.)


CDPs Are Making Good Progress 

The survey showed a sharp growth in CDP deployment, up from 19% in 2017 to 29% in 2020. Bear in mind that we’re surveying members of the CDP Institute, so this is not a representative industry sample. But it’s progress nevertheless. 


Where things got really interesting was a closer look at the relationship of customer data architectures to CDP status. You might think that pretty much everyone with a deployed CDP would have a unified customer database – after all, that’s the basic definition of a CDP and the numbers from the two questions are very close. But it turns out that just 43% of the respondents who said they had a deployed CDP also said they had a unified database (15% with the database alone and 28% with a database and shared orchestration engine). What’s going on here? 

The obvious answer is that people don’t understand what a CDP really is. Certainly we’ve heard that complaint many times. But these are CDP Institute members – a group that we know are generally smarter and better looking and, more to the point, should understand CDP accurately even if no one else does. Sure enough, when we look at the capabilities that people with a deployed CDP say they expect from a CDP, the rankings are virtually identical whether or not they report they have a unified database. 

(Do you like this chart format? It’s designed to highlight the differences in answers between the two groups while still showing the relative popularity of each item. It took many hours to get it to this stage. To clarify, the first number on each bar shows the percentage for the group that selected the answer less often and the second number shows the group that selected it more often. So, on the first bar above, 73% of people with a unified customer database said they felt a CDP should collect data from all sources and 76% of those without a unified database said the same. The color of the values and of the bar tip shows which group chose the item more often: green means it was more common among people with a unified database and red means it was more common among people without a unified database. Apologies if you’re colorblind.)

Answers regarding CDP benefits were also pretty similar, although there begins to be an interesting divergence: respondents without a unified database were more likely to cite advanced applications including orchestration, message selection, and predictive models. Some CDPs offer those and some don’t, and it’s fair to think that people who prioritized them might consider themselves having a proper CDP deployment even if they haven’t unified all their data. 


But the differences in the benefits are still pretty minor. Where things really get interesting is when we look at obstacles to customer data use (not to CDP in particular). Here, there’s a huge divergence: people without a unified database were almost twice as likely to cite challenges assembling unified data and using that data. 


Combining this with previous answers, I read the results this way: people who say they have a deployed CDP but not a unified database know quite well that a CDP is supposed to create a unified database. They just haven’t been able to make that happen. 

This of course raises the question of Why? We see from the obstacle chart that the people without unified data are substantially more likely to cite IT resources as an issue, with smaller differences in senior management support and data extraction. It’s intriguing that they are actually less likely to cite organizational issues, marketing staff time, or budget. 

Going back to our martech practices, we also see that those without a unified database are more likely to employ “worst practices” of using outside consultants to compensate for internal weaknesses and letting each group within marketing select its own technology. They’re less likely to have a Center of Excellence, use agile techniques, or follow a long-term martech selection plan. (If the sequencing of this chart looks a bit odd, it's because they're arranged in order of total frequency, including respondents without a deployed CDP.  That items at the bottom of the chart have relatively high values shows that deployed CDP owners selected those items substantially more often than people without a CDP.)


So, whatever the problems with their IT staff, it seems at least some of their problems reflect martech management weaknesses as well. 

But There's More...

The survey report includes two other analyses that touch on this same theme of management maturity as a driver of success. The first focuses on cross-channel orchestration as a marker of CDP understanding.  It turns out that the closer people get to actually deploying a CDP, the less they see orchestration as a benefit. My interpretation is that orchestration is an appealing goal but, as people learn more about CDP, they realize a CDP alone can't deliver it.  They then give higher priority to less demanding benefits.   (To be clear: some CDPs do orchestration but there are other technical and organizational issues that must also be resolved.)  


We see a similar evolution in understanding of obstacles to customer data use. These also change across the CDP journey: organizational issues including management support, budget, and cooperation are most prominent at the start of the process. Once companies start deployment, technical challenges rise to the top.  Finally, after the CDP is deployed, the biggest problem is lack of marketing staff resources to take advantage of it. You may not be able to avoid this pattern, but it’s good to know what to expect. 


The other analysis looks at CDP results. In the current survey, 83% of respondents with a deployed CDP said it was delivering significant value while 17% said it was not. This figure has been stable: it was 16% in our 2017 survey and 18% in 2019. 

I compared the satisfied vs dissatisfied CDP owners and found they generally agreed on capabilities and benefits, with orchestration again popping out as an exception: 65% of dissatisfied CDP owners cited it as a CDP benefit compared with just 45% of the satisfied owners. By contrast, satisfied owners were more likely to cite the less demanding goals of improved segmentation, predictive modeling, and data management efficiency. Similarly, the satisfied CDP users were less likely to cite coordinated customer treatments as a CDP capability and more likely to cite data collection. (Data collection still topped the list for both groups, at 77% for the satisfied owners and 65% for the others.) 

When it came to obstacles, the dissatisfied owners were much more likely to cite IT and marketing staff limits and organizational cooperation. The divergence was even greater on measures of martech management, including selection, responsibility, and techniques. 


In short, the dissatisfied CDP owners were much less mature martech managers than their satisfied counterparts. As CDP adoption moves into the mainstream, it becomes even more important for managers to recognize that their success depends on more than the CDP technology itself. 

There’s more in the report, including information on privacy compliance, and breakouts by region, company size, and company type. Again, you can download it here for free.

Thursday, August 27, 2020

Software Review: BigID for Privacy Data Discovery

Until recently, most marketers were content to leave privacy compliance in the hands of data and legal teams. But laws like GDPR and CCPA now require increasingly prominent consent notifications and impose increasingly stringent limits on data use. This means marketers must become increasingly involved with the privacy systems to ensure a positive customer experience, gain access to the data they need, and ensure they use the data appropriately. 

I feel your pain: it’s another chore for your already-full agenda.  But no one else can represent marketers’ perspectives as companies decide how to implement expanded privacy programs.  If you want to see what happens when marketers are not involved, just check out the customer-hostile consent notices and privacy policies on most Web sites.

To ease the burden a bit, I’m going to start reviewing privacy systems in this blog. The first step is to define a framework of the functions required for a privacy solution.   This gives a checklist of components so you know when you have a complete set. Of course, you’ll also need a more detailed checklist for each component so you can judge whether a particular system is adequate for the task. But let’s not get ahead of ourselves. 

At the highest level, the components of a privacy solution are:

  • Data discovery.  This is searching company systems to build a catalog of sensitive data, including the type and location of each item. Discovery borders on data governance, quality, and identity resolution, although these are generally outside the scope of a privacy system. Identity resolution is on the border because responding to data subject requests (see next section) requires assembling all data belonging to the same person. Some privacy systems include identity resolution to make this possible, but others rely on external systems to provide a personal ID to use as a link.

  • Data subject interactions.  These are interactions between the system and the people whose data it holds (“data subjects”).  The main interactions are to gather consent when the data is collected and to respond to subsequent “data subject access requests” (DSARs) to view, update, export, or delete their data. Consent collection and request processing are distinct processes.  But they are certainly related and both require customer interactions.  So it makes sense to consider them together. They are also where marketers are most likely to be directly involved in privacy programs.

  • Policy definition.  This specifies how each data type can be used.  There are often different rules based on location (usually where the data subject resides or is a citizen, but sometimes where the data is captured, where it’s stored, etc.), consent status, purpose, person or organization using the data, and other variables. Since regulations and company policies change frequently, this component includes processes to identify changes and either automatically adjust rules to reflect them or alert managers that adjustments may be needed. (A sketch of what such a rule might look like in code appears after this list.)

  • Policy application.  This monitors how data is actually used to ensure it complies with policies, sends alerts if something is not compliant, and keeps records of what’s done. Marketers may be heavily involved here but more as system users than system managers. Policy application is often limited to assessing data requests that are executed in other systems but it sometimes includes actions such as generating lists for marketing campaigns. It also includes security functions related specifically to data privacy, such as rules for masking of sensitive data or practices to prevent and react to data breaches. Again, security features may be limited to checking that rules are followed or may include running the processes themselves. Security features in the privacy system are likely to work with corporate security systems in at least some areas, such as user access management. If general security systems are adequate, there may be no need for separate privacy security features.
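
As promised above, here’s a minimal sketch of what a policy-definition rule might look like in code, assuming policies are predicates over data type, subject location, purpose, and consent status. The schema is invented for illustration, not drawn from any particular vendor:

```python
# Hypothetical policy rule: permit or deny a data use based on data
# type, subject location, purpose, and consent. Invented schema.
from dataclasses import dataclass

@dataclass
class DataUse:
    data_type: str    # e.g. "email", "health"
    residence: str    # where the data subject resides
    purpose: str      # e.g. "marketing", "fraud_prevention"
    consented: bool

def allowed(use: DataUse) -> bool:
    if use.data_type in {"health", "ssn"}:
        return False                  # sensitive types barred outright
    if use.residence == "EU" and use.purpose == "marketing":
        return use.consented          # GDPR-style consent requirement
    return True                       # default rule for this sketch

print(allowed(DataUse("email", "EU", "marketing", consented=False)))  # False
print(allowed(DataUse("email", "US", "marketing", consented=False)))  # True
```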

Bear in mind that one system need not provide all these functions.  Companies may prefer to stitch together several “best of breed” components or to find a privacy solution within a larger system. They might even use different privacy components from several larger systems, for example using a consent manager built into a Customer Data Platform and a data access manager built into a database’s core security functions. 

Whew.

Now that we have a framework, let's apply it to a specific product.  We'll start with BigID.

Data Discovery

BigID is a specialist in data discovery. The system applies a particularly robust set of automated tools to examine and classify all types of data – structured, semi-structured, and unstructured; cloud and on-premise; in any language. For identified items, it builds a list showing the application, object name, data type, server, geographic location, and other details. 

Of course, an item list is table stakes for data discovery.  BigID goes beyond this to organize the items into clusters related to particular purposes, such as medical claims, invoices, and employee information. It also draws maps of relations across data sources, such as how the transaction ID in one table connects to the transaction ID in another table (even if the field names are not the same). Other features highlight data sources holding sensitive information, alert users if these are not properly secured from unauthorized access, and calculate privacy risk scores. 

The relationship maps provide a foundation for identity resolution, since BigID can compare values across systems to find matches and use the results to stitch together related records. The system supports fuzzy as well as exact matches and can compare combinations of items (such as street, city, and zip) in one rule.  But the matching is done by reading data from source systems for one person at a time, usually in response to an access request. This means that BigID could assemble a profile of an individual customer but won’t create the persistent profiles you’d see in a Customer Data Platform or other type of customer database. It also can’t pull the data together quickly enough to support real-time Web site personalization, although it might be fast enough for a call center. 
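
As a rough illustration of a combined-field rule of the kind just described (the thresholds and field names are my own; BigID’s internals aren’t public):

```python
# Sketch of one match rule combining exact and fuzzy comparisons,
# in the spirit of the street/city/zip example above. Thresholds
# are assumptions, not BigID's actual settings.
from difflib import SequenceMatcher

def similar(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def address_match(x: dict, y: dict) -> bool:
    return (
        x["zip"] == y["zip"]                         # exact component
        and similar(x["city"], y["city"]) >= 0.90    # fuzzy components
        and similar(x["street"], y["street"]) >= 0.75
    )

a = {"street": "12 Main Street", "city": "Springfield", "zip": "01101"}
b = {"street": "12 Main St.",    "city": "Springfield", "zip": "01101"}
print(address_match(a, b))  # True despite the abbreviated street
```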

In fact, BigID doesn’t store any data outside of the source systems except for metadata.  So there's no reason to confuse it with a data lake, data warehouse, CRM, or CDP.

Data Subject Interactions

BigID doesn’t offer interfaces to capture consent but does provide applications that let data subjects view, edit, and delete their data and update preferences. When a data access request is submitted, the system creates a case that is sent to other systems or people to execute. BigID provides a workflow to track the status of these cases but won’t directly change data in source systems. 

Policy Definition 

BigID doesn’t have an integrated policy management system that lets users define and enforce data privacy rules. But it does have several components to support the process:

  • "Agreements" let users document the consent terms and conditions associated with specific items. This does not extend to checking the status of consent for a particular individual but does create a way to check whether a consent-gathering option is available for an item.

  • “Business flows” map the movement of data through business processes such as reviewing a resume or onboarding a new customer. Users can document flows manually or let the system discover them in the data it collects during its scan of company systems. Users specify which items are used within a flow and the legal justification for using sensitive items. The system will compare this with the list of consent agreements and alert users if an item is not properly authorized. BigID will also alert process owners if a scan uncovers a sensitive new data item in a source system.  The owner can then indicate whether the business flow uses the new item and attach a justification. BigID also uses the business flows to create reports, required by some regulations, on how personal data is used and with whom it is shared. 

  • “Policies” let users define queries to find data in specified situations, such as EU citizen data stored outside the EU. The system runs these automatically each time it scans the company systems. Query results can create an alert or task for someone to investigate. Policies are not connected to agreements or business flows, although this may change in the future. 

Policy Enforcement

BigID doesn’t directly control any data processing, so it can’t enforce privacy rules. But the alerts issued by the policy, agreement, and business flow components do help users to identify violations. Alerts can create tasks in workflow systems to ensure they are examined and resolved. The system also lets users define workflows to assess and manage a data breach should one occur. 

Technology 

As previously mentioned, BigID reads data from source systems without making its own copies or changing any data in those systems. Clients can run it in the cloud or on-premises. System functions are exposed via APIs which let the company, clients, or third parties build apps on top of the core product. In fact, the data subject access request and preference portal functions are among the applications that BigID created for itself. It recently launched an app marketplace to make its own and third party apps more easily available to its clients.

Business 

BigID has raised $146 million in venture funding and reports nearly 200 employees. Pricing is based on the number of data sources: the company doesn’t release details but it’s not cheap. It also doesn’t release the number of clients but says the count is “substantial” and that most are large enterprises.

Tuesday, August 18, 2020

Data Security is a Problem Marketers Must Help Fix


Everything you need to know about 2020 is covered by the fact that “apocalypse bingo” is already an over-used cliché. So I doubt many marketers have found spare time to worry about data security – which most would consider someone else’s problem. But bear in mind that 92% of consumers say they would avoid a company after a data breach. So, like it or not, security is a marketer’s problem too. 

Unfortunately, the problem is a big one. I recently took a quick scan of research on the issue, prompted in particular by a headline that nearly half of companies release software they know contains security flaws.  Sounds irresponsible, don't you think?  The main culprit in that case is pressure to meet deadlines, compounded by poor training in security procedures. If there’s any good news, it’s that the most-used applications have fewer unresolved security flaws than average, suggesting that developers pay more attention when they know it’s most important. 

The research is not reassuring. It may be a self-fulfilling prophecy, but most security professionals see data breaches as inevitable. Indeed, many think a breach is good for their career, presumably because the experience makes them better at handling the next one. Let’s just be grateful they're not airline pilots. 

Still, the professionals have a point. Nearly every company reports a business-impacting cyberattack in the past twelve months. Even before COVID-19, fewer than half of IT experts were confident their organizations can stop data breaches with current resources.

The problems are legion. In addition to deadline pressures and poor training, researchers cite poorly vetted third-party code libraries (charmingly described as “shadow code”), compromised employee accounts, insecure cloud configurations, and attacks on Internet of Things devices.

Insecure work-from-home practices during the pandemic only add new risk. One bit of good news is that CIOs are spending more on security,  prioritizing access management and remote enablement. 

What’s a marketer to do?  One choice is to just shift your attention to something less stressful, like fire tornados and murder hornets. It’s been a tough year: I won’t judge. 

But you can also address the problem. System security in general is managed outside of most marketing departments. But marketers can still ensure their own teams are careful when handling customer data (see this handy list of tips from the CDP Institute). 

Marketers can also take a closer look at privacy compliance projects, which often require tighter controls on access to customer data. Here’s an overview of what that stack looks like.  CDP Institute also has a growing library of papers on the topic.

Vendors like TrustArc, BigID, OneTrust, Privitar, and many others offer packaged solutions to address these issues. So do many CDP vendors. Those solutions involve customer interactions, such as consent gathering and response to Data Subject Access Requests.  Marketers should help design those interactions, which are critical in convincing consumers to share personal data that marketers need for success. The policies and processes underlying those interfaces are even more important for delivering on the promises the interfaces make.

In short, while privacy and security are not the same thing, any privacy solution includes a major security component. Marketers can play a major role in ensuring their company builds solid solutions for both. 

Or you can worry about locusts.


Saturday, July 25, 2020

Don't Misuse Proof of Concept in System Selection

Call me a cock-eyed optimist, but marketers may actually be getting better at buying software. Our research has long shown that the most satisfied buyers base their selection on features, not cost or ease of use. But feature lists alone are never enough: even if buyers had the knowledge and patience to precisely define their actual requirements, no set of checkboxes could capture the nuance of what it’s actually like to use a piece of software for a specific task. This is why experts like Tony Byrne at Real Story Group argue instead for defining key use cases (a.k.a. user stories) and having vendors demonstrate those. (If you really want to be trendy, you can call this a Clayton Christensen-style “job to be done”.)

In fact, use cases have become something of an obsession in their own right. This is partly because they are a way of getting concrete answers about the value of a system: when someone asks, “What’s the use case for system X”, they’re really asking, “How will I benefit from buying it?” That’s quite different from the classic definition of a use case as a series of steps to achieve a task. It’s this traditional definition that matters when you apply use cases to system selection, since you want the use case to specify the features to be demonstrated. You can download the CDP Institute’s use case template here.
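To make that concrete, here’s a rough sketch of a selection-oriented use case written out as data, so each step points at the features a vendor should demonstrate. It’s a hypothetical example of my own, not the Institute’s template:

```python
# A use case in the classic sense: a sequence of steps to achieve a task,
# with each step mapped to the system features it would exercise.
# (Hypothetical example for illustration only.)
abandoned_cart = {
    "goal": "recover revenue from abandoned shopping carts",
    "steps": [
        ("identify visitors who carted items but didn't buy", ["web event ingestion"]),
        ("match those visitors to known email addresses",     ["identity resolution"]),
        ("suppress anyone who purchased in the meantime",     ["real-time profile updates"]),
        ("send a reminder featuring the carted items",        ["email connector", "dynamic content"]),
    ],
}

# The feature list to demonstrate is just the union across steps.
features_to_demo = {f for _, feats in abandoned_cart["steps"] for f in feats}
```

The point is that each step, not just the end benefit, tells the vendor exactly what to show.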

But I suspect the real reason use cases have become so popular is that they offer a shortcut past the swamp of defining comprehensive system requirements. Buyers in general, and marketers in particular, lack the time and resources to create complete requirements lists based on their actual needs (although they're perfectly capable of copying huge, generic lists that apply to no one).  Many buyers are convinced it’s not necessary and perhaps not even possible to build meaningful requirements lists: they point to the old-school “waterfall” approach used in systems design, which routinely takes too long and produces unsatisfactory results. Instead, buyers correctly see use cases as part of an agile methodology that evolves a solution by solving a sequence of concrete, near-term objectives.

Of course, any agile expert will freely admit that chasing random enhancements is not enough. There also needs to be an underlying framework to ensure the product can mature without extensive rework. The same applies to software selection: a collection of use cases will not necessarily test all the features you’ll ultimately need. There’s an unstated but, I think, widely shared assumption that use cases are a type of sampling technique: that is, a system that meets the requirements of the selected use cases will also meet other, untested requirements. It’s a dangerous assumption. (To be clear: a system that can’t support the selected use cases is proven inadequate. So sample use cases do provide a valuable screening function.)

Consciously or subconsciously, smart buyers know that sample use cases are not enough. This may be why I’ve recently noticed a sharp rise in the use of proof of concept (POC) tests. Those go beyond watching a demonstration of selected use cases to actually installing a trial version of a system and seeing how it runs. This is more work than use case demonstrations but gives much more complete information.

Proof of concept engagements used to be fairly rare. Only big companies could afford to run them because they cost quite a bit in both cash (most vendors required some payment) and staff time (to set up and evaluate the results). Even big companies would deploy POCs only to resolve specific uncertainties that couldn’t be settled without a live deployment.

The barriers to POCs have fallen dramatically with cloud systems and Software-as-a-Service. Today, buyers can often set up a test system with just a few mouse clicks (although it may take several days of preparation before those clicks will work). As a result, POCs are now so common that they can almost be considered a standard part of the buying process.

Like the broader application of use cases, having more POCs is generally a good thing. But, also like use cases, POCs can be applied incorrectly.

In particular, I’ve recently seen several situations where POCs were used as an alternative to basic information gathering. The most frightening was a company that told me they had selected half a dozen wildly different systems and were going to do a POC with each of them to figure out what kind of system they really needed.

The grimace they didn’t see when I heard this is why I keep my camera off during Zoom meetings. Even if the vendors do the POCs for free, this is still a major commitment of staff time that won’t actually answer the question. At best, they’ll learn about the scope of the different products. But that won’t tell them what scope is right for them.

Another company told me they ran five different POCs, taking more than six months to complete the process, only to discover later that they couldn’t load the data sources they expected (but hadn’t included in their POCs). Yet another company let their technical staff manage a POC and declare it successful, only to learn later that the system had been configured in a way that didn’t meet actual user needs.

You’re probably noticing a dreary theme here: there’s no shortcut for defining your requirements. You’re right about that, and you’re also right that I’m not much fun at parties. As to POCs, they do have an important role but it’s the same one they played when they were harder to do: they resolve uncertainties that can’t be resolved any other way.

For Customer Data Platforms, the most common uncertainty is probably the ability to integrate different data sources.  Technical nuances and data quality are almost impossible to assess without actually trying to load each system.  Since these issues have more to do with the data source than the CDP, this type of POC is more about CDP feasibility in general than CDP system selection. That means you can probably defer your POC until you’ve narrowed your selection to one or two options – something that will reduce the total effort, encourage the vendor to learn more about your situation, and help you to learn about the system you’re most likely to use.
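If data integration is your key uncertainty, even a crude profiling pass over sample extracts will surface many of those nuances before you commit to a full POC load. Here’s a minimal sketch of what I mean; the file and field names are hypothetical:

```python
import pandas as pd

# Crude pre-POC profiling of two candidate sources (hypothetical files and fields).
crm = pd.read_csv("crm_extract.csv")
orders = pd.read_csv("order_history.csv")

# How often is the match key missing or duplicated?
print("missing emails:", crm["email"].isna().mean())
print("duplicate emails:", crm["email"].str.lower().duplicated().mean())

# Cross-source match rate: what share of CRM emails appear in the order file?
crm_keys = set(crm["email"].dropna().str.lower())
order_keys = set(orders["email"].dropna().str.lower())
print("match rate:", len(crm_keys & order_keys) / max(len(crm_keys), 1))
```

Numbers like these won’t replace a live load, but they will tell you which sources deserve one.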

The situation may be different with other types of software. For example, you might want to test a wide variety of predictive modeling systems if the key uncertainty is how well their models will perform. That’s closer to the classic multi-vendor “bake-off”. But beware of such situations: the more products you test, the less likely your staff is to learn each product well.

With a predictive modeling tool, it’s obvious that user skill can have a major impact on results. With other tools, the impact of user training on outcomes may not be obvious. But users who are assessing system power or usability may still misjudge a product if they haven’t invested enough time in learning it.  Training wheels are good for beginners but get in the way of an expert. Remember that your users will soon be experts, so don’t judge a system by the quality of its training wheels.

This brings us back to my original claim. Are marketers really getting better at buying software? I’ll stand by that and point to broader use of tools like use cases and proofs of concept as evidence. But I’ll repeat my caution that use cases and POCs must be used to develop and supplement requirements, not to replace them. Otherwise they become an alternate route to poor decisions rather than guideposts on the road to success.

Monday, April 27, 2020

Here's a Game about Building Your Martech Stack

TL;DR: you can play the game here.

I’ve recently been running workshops to help companies plan deployment of their Customer Data Platforms. Much of the discussion revolves around defining use cases and, in particular, deciding which to deliver first. This requires balancing the desire to include many data sources in the first release of the system against the desire to deliver value quickly. The challenge is to find an optimal deployment sequence that starts with the minimum number of sources needed for an important use case and then incrementally adds new sources that support new use cases. I’ve always found that an intriguing problem although I’ll admit few others have shared my fascination.

As coronavirus forces most marketers to work from home, I’ve also been pondering ways to deliver information that are more engaging than traditional Webinars and, ahem, blog posts. The explosion of interest in games in particular seems to offer an opportunity for creative solutions.

So it was fairly natural to conceive of a game that addresses the deployment sequence puzzle. The problem seems like a good candidate: it’s governed by a few simple dynamics that become interestingly complex when they interact. The core dynamic is that one new data source may support multiple new use cases, while different combinations of sources support different use cases. This means you could calculate the impact of different sequences to compare their value.

Of course, some use cases are worth more than others and some sources cost more to integrate than others; you also have to consider the availability of the CDP itself, of central analytical and campaign systems, and of delivery systems that can use the outputs. But for game purposes, you could simplify matters by assuming that each system costs the same and each use case has the same value. This still leaves in place the core dynamic of balancing the cost of adding one system against the value of enabling multiple use cases with that system.

To make things even more interesting and realistic, you could add the fact that some use cases are possible with a few systems but become more valuable as new systems come online. It might be that their data adds value – say, by making predictions more accurate – or that they enable delivery of messages in more channels.

In the end, then, you end up with a matrix that crosses a list of systems (data sources, CDPs, analytics, campaign management, and delivery systems) against a list of use cases. Each cell in the matrix indicates whether a particular system is essential or optional for a particular use case. Value for any given period would include the one-time cost of adding a new system, the recurring cost of operating active systems, and the value generated by each active use case. That use case value would include a base value earned by running the use case plus incremental value from each optional system. In the matrix, red indicates required systems and grey indicates optional systems.

The game play would then be to select systems one at a time, calculate the value generated as that period’s revenue, and repeat until you run out of systems to add. Sometimes you’d select a system because it made a new use case possible, sometimes because it added optional value to already-active use cases, and sometimes to make more use cases possible in the future. Fun!
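For the curious, here’s a minimal Python sketch of that loop. The systems, use cases, costs, and values are all invented for illustration, and the greedy strategy is just one way to play:

```python
# Minimal sketch of the stack-building game (all names and numbers invented).
ADD_COST = 10      # one-time cost to bring a new system online
RUN_COST = 2       # recurring per-period cost for each active system
BASE_VALUE = 15    # value earned each period by an active use case
OPTION_BONUS = 5   # extra value per active optional system

USE_CASES = {
    "abandoned cart email":   {"required": {"ecommerce", "CDP", "email"},
                               "optional": {"web analytics"}},
    "churn-risk offers":      {"required": {"CRM", "CDP", "email"},
                               "optional": {"ecommerce", "call center"}},
    "lookalike ad audiences": {"required": {"CDP", "ad platform"},
                               "optional": {"CRM", "web analytics"}},
}

def period_value(active):
    """One period's net value: use case revenue minus this period's system costs."""
    value = -ADD_COST - RUN_COST * len(active)  # one system added per period
    for uc in USE_CASES.values():
        if uc["required"] <= active:  # use case is live once all required systems are
            value += BASE_VALUE + OPTION_BONUS * len(uc["optional"] & active)
    return value

ALL_SYSTEMS = set().union(*[uc["required"] | uc["optional"] for uc in USE_CASES.values()])

# Greedy play: each turn, add whichever system most improves the period value.
active, total = set(), 0
while active < ALL_SYSTEMS:
    best = max(ALL_SYSTEMS - active, key=lambda s: period_value(active | {s}))
    active.add(best)
    total += period_value(active)
    print(f"added {best}: period value {period_value(active)}, total {total}")
```

Greedy selection is only a baseline: as noted above, sometimes the right move is a system that pays off in later turns, which is exactly what makes the sequencing interesting.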

I then showed this to a professional game designer, whose response was “you may have found the least fun form factor imaginable: the giant data-filled spreadsheet. I'm kind of impressed.”

Ouch, but he had a point. I personally found the game playable using a computer to do the calculations, but others found it impenetrable. A version using physical playing cards was clearly impossible.

So, after much pondering, I came up with a vastly simplified version that collapsed the 19 systems in the original model into three categories and required each use case to specify only the number of systems it needed from each category. I did keep the distinction between required and optional systems, since that has a major impact on the effectiveness of different solutions. I also simplified the value calculations by removing system cost, since that would be the same across all solutions so long as you add one system per period.


The result was a much simpler matrix, with just six columns (required and optional counts for each of the three system types) and one row per use case (22 in the example). I built this into a spreadsheet that does the scoring calculations and stores results for each period, so the only decision players need to make in any turn is which of the three system types to select. Even my game designer grudgingly allowed that it “made sense pretty quickly” and was “kinda fun”. That’s about all the enthusiasm I can hope for, I suspect.
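For anyone who wants the mechanics, here’s my reconstruction of the simplified scoring, based on the description above rather than the actual spreadsheet formulas:

```python
# Simplified model: three system categories instead of 19 named systems.
# Each use case needs a minimum count of systems per category; spare systems
# beyond the requirement fill optional slots for bonus value.
# (The base and bonus values here are invented; the spreadsheet's may differ.)

# (required counts, optional counts) per category, one row per use case
USE_CASES = [
    ({"data": 1, "decision": 1, "delivery": 1}, {"data": 1}),
    ({"data": 2, "decision": 1, "delivery": 1}, {"delivery": 2}),
    # ...22 rows in the example game
]

def period_score(owned):
    """Score one period, given counts of owned systems by category."""
    total = 0
    for required, optional in USE_CASES:
        if all(owned.get(cat, 0) >= n for cat, n in required.items()):
            total += 10  # base value for an active use case
            for cat, n in optional.items():
                spare = owned.get(cat, 0) - required.get(cat, 0)
                total += 2 * min(n, max(spare, 0))  # bonus per optional slot filled
    return total

print(period_score({"data": 2, "decision": 1, "delivery": 1}))  # prints 22
```

Each turn you add one system to a category, rescore, and accumulate the totals across periods.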

I’ve put a working version of this in a Google spreadsheet that you can access here.

Go ahead and give it a play – it just takes a few minutes to complete once you grasp how it works (put a ‘1’ in the column for each period to select the class of system to add during that period). Most of the spreadsheet is write-protected but there’s a leaderboard if you can beat my high score of 1,655.

Needless to say, I’m interested in feedback. You can reach me through LinkedIn here.
Although this started as a CDP planning exercise, it’s really a martech stack building game, something I think we can all agree the world desperately needs right now. I’ve also worked out a physical card game version with a number of additional features to make games more interesting and longer-lasting. Who wants to play?

Thursday, April 02, 2020

A Dozen Market Research Studies on COVID-19 Business Impact

This sums it up. It’s from Bank of America via Twitter, but I can’t find a link to the original.

As marketers finish their initial emergency adjustments to coronavirus lockdowns, they are starting to think about longer-term plans. While the shape of things to come is impossible to guess, reporting on industry changes has become a marketing trend of its own. Here are a dozen-plus studies I’ve seen in the past week, most of which are ongoing.

Retail Behavior Data

Adobe this week launched their Digital Economy Index, a long-term project that gained unexpected immediate relevance. The index draws on trillions of Web visits tracked by Adobe systems to construct a digital consumer shopping basket tracking a mix of products including apparel, electronics, home and garden, computers, groceries, and more. The headline finding of the initial report would have been a continuing drop in prices driven by electronics, but this was overshadowed by short-term changes including a 225% increase in ecommerce from March 1-11 to March 13-15. Online groceries, cold medications, fitness equipment and computers surged, as did preordering for in-store pickup. Extreme growth was concentrated in hard-hit areas including California, New Hampshire and Oregon.

Customer Data Platform vendor Amperity reported a less rosy result in its COVID-19 Retail Monitor, which draws data from Amperity’s retail clients. They report that total retail demand had fallen 86% by the end of March and that even online revenue was down 73%. Food and health products fell after an initial stock-up surge in mid-March.

Retail foot traffic tracker Placer.ai has packaged its in-store data in a COVID-19 Retail Impact tracker, which not surprisingly shows an end to traffic at shuttered entertainment and clothing outlets, a near-total drop at restaurants, and mixed results for grocery stores and pharmacies. Results are reported by day and by brand, if you really want to wallow in the gruesome details.

Grocery merchandising experts Symphony RetailAI have also launched a COVID-19 Insights Hub, which reports snippets of information with explanations. These range from obvious (consumers are accepting more product substitutions in the face of stock-outs) to intriguing (canned goods sales rose twice as much in the U.S. as in Europe because of smaller families and less storage space).

Retail Behavior Surveys

Showing just how quickly the world changed, retail consumer research platform First Insight found that the impact of coronavirus on U.S. shopping behavior doubled between surveys on February 28 and March 17. In the later survey, 49% of consumers said they were buying less in-store and 34% were shopping more online. Women and baby boomers went from changing their behavior slightly less than average in the first survey to changing slightly more than average in the second.

Ecommerce platform Yotpo ran its own survey on March 17, reaching 2,000 consumers across the U.S., Canada, and the United Kingdom. They found consumers evenly split between expecting to spend more or less overall, with just 32% expecting to shift purchases online. Food, healthcare, and, yes, toilet paper were high on their shopping lists.

The situation was clearer by the end of March, when Retail Systems Research surveyed 1,200 American consumers for Yottaa. By this time, 90% were hesitant to shop in-store, 94% expected online shopping to be important during the crisis, and their top concerns were unavailable inventory, no free shipping, and slow websites. (Really, no free shipping?) More surprising but prescient, given Amazon’s labor troubles: just 42% felt confident that Amazon could get their online orders delivered on time.

Media Consumption

Nobody wins any prizes for figuring out that Web traffic went up when people were locked down. But digital analytics vendor Contentsquare did provide a detailed analysis of which kinds of Web sites attracted more traffic (supermarkets, media, telecom, and tech retail) and which went down the most (luxury goods, tourism, and live entertainment) in the U.S., UK, and France. Week-by-week data since January shows a sharp rise starting March 16. Less easily predictable: supermarket and media conversion rates went down as consumers spent more time searching for something they wanted.

Media tracking company Comscore has also weighed in with an ongoing series of coronavirus analyses. Again, no surprises: streaming video, data, newscasts, and daytime TV viewing are all up. Same for Canada and India, incidentally.


You also won’t be shocked to learn that Upfluence found a 24% increase in viewing on the live-streaming game platform Twitch in Europe. Consumption growth tracked national lockdowns, jumping in Italy during the week of March 8-14 and in France and Spain the week after.

Consumer review collector PowerReviews has its own data, based on 1.5 million product pages across 1,200 Web sites. Unlike Contentsquare, they found traffic was fairly flat but conversion rates jumped on March 15 and doubled by March 20. Their explanation is that people were buying basic products that required less consideration. People read many more reviews, but submission levels and sentiment were stable. Reviews were shorter, as consumers likely had other things on their minds.


Influencer marketing agency Izea got ahead of the game with a March 12 survey, asking social media consumers how they thought they’d behave during a lockdown. More social media consumption was one answer, with Facebook and YouTube heading the list. Izea also predicted that influencer advertising prices would fall as more influencers post more content.


Consumer Attitudes


Researching broader consumer attitudes, ITWP companies Toluna, Harris Interactive, and KuRunData launched a Global Barometer: Consumer Reactions to COVID-19 series covering the U.S., UK, Australia, India, and Singapore. The first wave of data was collected March 25-27. People in the U.S. and India were generally more satisfied with how businesses had behaved and more optimistic about how quickly things would return to normal. But U.S. respondents ranked support from the national government considerably lower than anyone else.


The Edelman Trust Barometer issued a ten-market Special Report on COVID-19, although the data was gathered during the good old days of March 6-10. Even then, most people were following the news closely and 74% worldwide felt there was a lot of false information being spread. Major news outlets were the primary information source everywhere (64%), but the U.S. government was far less relied upon (25%) than any other national government (31% to 63%). Interestingly, people put more faith in their employers than anyone except health authorities. They also expected business to protect their workers and local communities.


Kantar Media has yet another COVID-19 Barometer, although they reserve nearly all results for paying clients. The findings they did publish echo the others: more online media consumption, low trust in government, and expectation that employers will look after their employees. Kantar says that just 8% of consumers expect brands to stop advertising but 77% want advertising to show how brands are being helpful, 75% think brands should avoid exploiting coronavirus and (only?) 40% feel brands should avoid “humorous tones”.

Survey company YouGov publishes a continuously-updated International COVID-19 Tracker with timelines on changing opinions in 26 countries. Some behaviors, such as avoiding public places and not going to work, change quickly; others, such as fear of catching coronavirus and wearing masks, move more slowly. Still other attitudes have barely shifted, including avoiding raw meat and improving personal hygiene. The timing of changes correlates with the situation in each country.

Job Listings


There’s also an intriguing little niche of companies offering job information. PR agency Global Results Communications just launched a COVID-19 Job Board to help people find work. So far, it’s not very impressive: as of April 1 it had under 100 random listings, ranging from Walmart to Metrolina Greenhouses to the South Carolina National Guard.


Tech salary negotiator Candor (did you know that was a business?) has a vastly more useful site, listing 2,500+ companies that are reported to be hiring, freezing hiring, rescinding offers, or laying people off. At the moment, half the companies on the list are hiring. The site offers a very interesting breakdown by industry: transportation, retail, consulting, energy, and automotive are in the worst shape. Defense, productivity and education software, and communications are doing the best.