Sunday, June 27, 2021

Where Do Low-Code and No-Code Fit in the Build vs Buy Debate?

I thought it might be my imagination, but Google Trends confirms that “build vs buy” really is coming up more often these days than it did in recent years.

This surprises me, since I had thought that debate was over. It seemed that most organizations had accepted the default position of buying when possible and building only when necessary.

In the world of customer data management, I’d say the reason for the new interest in building systems is that corporate IT is more involved than before. Remember the growth in marketing and sales technologies was due mostly to the fact that marketing and sales teams could often buy what they wanted as a Software-as-a-Service subscription, with little or no involvement of the central IT team. Marketing and sales leaders had no interest in building software, and even martech and salestech groups were more oriented to selecting and implementing systems than building their own. But as customer experience rose to become a primary corporate priority, a trend accelerated by the pandemic, it became more important to integrate marketing and sales systems with the rest of the company. Privacy rules created a parallel pressure for corporate-level involvement. And corporate IT groups are much more likely to at least consider building their own system.

It’s less obvious why interest in build vs buy should also have grown in the broader IT world. One possibility is that the growth of low-code and no-code options has made building your own systems a more attractive alternative. I can’t really say the Google Trends data bear this out, since the dip in build vs buy interest happened after the initial growth in low-code development. The correlation is better for no-code, although there are probably other, equally plausible explanations.

What’s more important is that no-code and low-code do change the terms of the build vs buy debate. One way to think about this is to break the software development process into four main components and see who does what under each approach. Those four components are:

  • requirements definition: this is identifying system goals and the functions needed to reach those goals. It is (or should be) done by users regardless of the development approach.
  • process creation: this is the design and building of the system processes that meet the requirements. It’s the core of the development process. It’s done by IT under “build” and “low-code” approaches, by the users themselves under “no-code”, and by the software vendor under “buy”.
  • platform creation: this is the development and maintenance of the structures that support the system processes. It doesn’t really have a separate existence in “build” or “buy” approaches, where it’s tightly intertwined with process creation. But it’s distinctly separate in “no-code” and “low-code”, since it’s purchased from the software vendor rather than built by either users or IT. (Note: even “built” systems rely on purchased technology, such as operating systems and database engines. We can ignore that for present purposes.)
  • integration: this is connecting the system with other corporate systems. It’s always done by the IT department, which often argues it’s more work to integrate a purchased system than to connect a home-built one. (In many cases, they’re probably right.)
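
If it helps to see that division of labor spelled out, here is a minimal Python sketch that simply restates the bullets above as a lookup table. The labels and function are mine, purely for illustration:

```python
# Who handles each component under each approach, restating the bullets above.
# Platform creation is folded into process creation for build and buy, per the text.
RESPONSIBILITY = {
    "build":    {"requirements": "users", "process": "IT",     "platform": "IT (part of process)",     "integration": "IT"},
    "low-code": {"requirements": "users", "process": "IT",     "platform": "vendor",                   "integration": "IT"},
    "no-code":  {"requirements": "users", "process": "users",  "platform": "vendor",                   "integration": "IT"},
    "buy":      {"requirements": "users", "process": "vendor", "platform": "vendor (part of process)", "integration": "IT"},
}

def who_does(approach: str, component: str) -> str:
    """Return the group responsible for one component under one approach."""
    return RESPONSIBILITY[approach][component]

print(who_does("no-code", "process"))   # -> users
```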

You can visualize the four development approaches as a column chart showing the distribution of work by department:

This diagram highlights several points worth considering:

  • users are always responsible for defining their requirements. That should go without saying, but there’s sometimes an assumption that a highly-rated package will have all the features users want, so there’s no need to define requirements or check out the package in detail. In fact, skipping that step is probably the most common mistake people make when buying software. It applies equally to “no-code” systems, where a particular required feature may not be built into the system’s capabilities. Best to know that in advance.
  • IT is always involved. If nothing else, they’ll do the integration work. In practice, they’ll also be vetting any purchased system against security, privacy, reliability, and other standards. This is another thing that should go without saying, but IT should be part of the process from the start.
  • The real battle is over process creation. This is where either users, vendors, or IT may end up doing the work. The key question then becomes which group is best equipped to handle this task for any particular project. If the processes are mostly limited to individual users, who may each have their own preferred way of doing things, then letting the users define their own process with no-code makes sense. If the processes are shared by many users and fairly standard for the industry, a purchased package will likely work the best. If the processes are unique to the company, but not exceptionally odd, a low-code package should work. If the processes require capabilities that won’t be well supported in a low-code package, then custom-building the system is probably the best choice. 

This thinking is summarized in the matrix below, which proposes that the appropriate choice depends on two binary variables: process variety (is the process the same for all users or do they each want their own) and process difficulty (either defined in terms of absolute complexity, or of how unique the process is to the company compared with others in the industry). 


The underlying logic of the matrix is that packaged systems, whether no-code or bought, make the most sense when the underlying processing is simple or common to the industry, while IT involvement via low-code or build is required to create complex or unique processes. When IT is involved, the productivity advantages of low-code are most important where individualized processes mean a great deal of customization is needed. The productivity advantages of low-code are less critical where there is one shared process to create, and any limits on the complexity imposed by low-code may be more important. (I’m not entirely convinced this matrix is correct, but it’s a good starting place for discussion.)
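
Since the matrix boils down to two yes/no questions, its logic collapses into a few lines of code. Here is a minimal Python sketch; the parameter names are mine, and, as noted, the matrix itself is only a starting point for discussion:

```python
def suggested_approach(process_varies_by_user: bool, complex_or_unique: bool) -> str:
    """Map the two yes/no questions from the matrix to a development approach."""
    if complex_or_unique:
        # IT needs to be involved: low-code pays off when many individual
        # variations demand lots of customization; custom build fits a single
        # shared process whose complexity may exceed low-code's limits.
        return "low-code" if process_varies_by_user else "build"
    # Simple or industry-standard processing: users can configure their own
    # no-code tools, or everyone shares one process and a purchased package fits.
    return "no-code" if process_varies_by_user else "buy"

for varies in (True, False):
    for hard in (True, False):
        print(varies, hard, "->", suggested_approach(varies, hard))
```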

  • The magnitude of each component has an unclear role in the choice. It might seem that projects with very complex integration requirements should be built by IT, while those with extensive user requirements should let users create their own no-code solutions. But if a purchased platform supports the integration needs, their complexity doesn’t matter. Similarly, if the extensive user requirements are well understood and stable, it’s a waste of time for users to build them for themselves. If anything, extensive and stable user requirements would favor a purchased system, assuming the system already meets them.

Beyond these specific propositions, I think it’s generally helpful to break a project into these components when thinking about the right development approach. They’re all important, and different enough that treating them separately ensures each gets considered. If you have other thoughts on how to apply them, or can offer a better approach, please share your comments.

Friday, May 07, 2021

Seers Offers Easy-to-Use Cookie Consent

The growth in global privacy regulation has created an immense headache for thousands of businesses – and, thus, an immense opportunity for systems that offer relief. Small businesses in particular need simple, low-cost solutions to comply with rules that require gathering consumer consent to data collection and giving consumers access to data that’s been collected. An ideal small-business solution leads users through the set-up process without requiring technical skills or expertise in privacy rules. (Come to think of it, so does an ideal large-business solution.)

Seers is one of many vendors addressing this market. Its core products are systems to collect cookie consent and data access requests. It supplements these with products for access request fulfillment, data privacy impact assessments, GDPR compliance assessment, data breach reporting, policy creation, data discovery, and on-demand privacy training. Several of the products are engineered to support mid-size and large enterprises as well as small business.

Set-up of the Seers consent manager follows a step-by-step process. It starts with the user specifying the domain to be supported. The system then automatically scans this domain to identify the cookies and scripts currently installed. It will later compare the results to a list of hundreds of cookies that Seers has evaluated and classified, and generate a list of the specific cookie consent requests the site must present to visitors. Before this step, users set preferences for the cookie banner appearance, cookie policy URL, treatment of unconsented visitors, and other options. The interface includes handy explanations of each item so that users can make informed choices. The system will detect site visitor location and can use this to present consents in 30 local languages and in different formats for GDPR, California’s CCPA, and Brazil’s LGPD.

Once settings are complete, Seers will generate code to insert in the user’s Web site to deploy the consent system. It will then keep track of consents as these are received, providing an audit trail should documentation be needed. The system will regularly rescan the user’s Web site to identify the cookies currently in use and adjust the cookie consent table accordingly. A limited free version is available and the full module starts at $9 per month for a single Web domain.
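
To make the scan-and-classify step concrete, here is a rough Python sketch of the general idea. To be clear, this is my illustration, not Seers’ implementation: the classification list is invented, and a real scanner would render JavaScript and crawl many pages rather than fetch one.

```python
# Rough sketch of scan-and-classify: fetch a page, look at the cookies it sets,
# and match them against a (made-up) classification list.
import requests

KNOWN_COOKIES = {            # hypothetical classification list
    "_ga": "analytics",
    "_fbp": "marketing",
    "PHPSESSID": "strictly necessary",
}

def scan_domain(url: str) -> dict:
    """Return {cookie_name: category} for cookies set on a simple GET request."""
    response = requests.get(url, timeout=10)
    return {cookie.name: KNOWN_COOKIES.get(cookie.name, "unclassified")
            for cookie in response.cookies}

print(scan_domain("https://example.com"))
```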

Seers’ other main customer-facing module is Subject Request Management, which offers a portal that lets customers ask to see, change, or delete data a company holds about them. This is similarly easy to configure, letting users control the appearance, identity verification requirements, and other options. It feeds requests into a queue which lets users manually assign them to departments and individuals for resolution, tracks their status, and stores notes and attachments. Again, a limited free version is available while a single-user full version costs just under $50 per year.
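
Purely as an illustration (again, not Seers’ actual data model), the kind of request queue described above amounts to something like this:

```python
# Generic sketch of a data subject request record: a type, a status,
# an assignee, and a trail of notes.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class SubjectRequest:
    requester_email: str
    request_type: str                 # "access", "change", or "delete"
    received: date
    status: str = "new"               # new -> assigned -> resolved
    assigned_to: Optional[str] = None
    notes: list = field(default_factory=list)

    def assign(self, person: str) -> None:
        """Route the request to a department or individual for resolution."""
        self.assigned_to = person
        self.status = "assigned"

req = SubjectRequest("jane@example.com", "access", date.today())
req.assign("privacy-team@acme.example")
req.notes.append("Identity verified by email confirmation.")
```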

Seers also offers a large number of interactive templates, assessments, and policy generators. These lead users through processes including data privacy impact assessment (DPIA), privacy policy creation, and GDPR compliance assessment. The ones I saw were all easy to follow and included impressive amounts of information. The privacy impact assessment module is priced at $46.99 per year while most of the other tools are bundled into a package starting at $129.99 one-time fee.

So far so good. I really liked what I saw from Seers. But there are gaps in its product line that mean most companies would need additional products for a complete solution. The company is closing one gap with a data discovery tool, now in beta and set for July release, which will let users build an inventory of the personal data stored in their systems. The first release, at least, will be limited to having users review field names and map these to standard categories. One nice touch is that the inventory will connect with data subject requests, so the system will be able to automatically pull information about an individual. But field names are not an entirely reliable source of information, and Seers does not have the data scanning capabilities of a system like BigID.

Other gaps include consent management beyond cookie consent; records of processing activity (ROPA) reports; ensuring that processing is legally justified; monitoring vendors who process company data; and automatic policy updates as rules change. Whether you need these depends on what other resources you have available. But they’re all required under current privacy regulations.

Sunday, May 02, 2021

Build vs Buy Your Customer Data Platform?

The build vs buy debate has existed as long as packaged software itself. Any serious discussion quickly concludes that there’s no one right answer and the real question is when to do one or the other. That discussion, in turn, usually leads to a recommendation that companies build software that will create unique competitive advantage and otherwise buy when a satisfactory option exists. The implicit assumption behind that recommendation is that buying is cheaper than building. This isn’t always true, but it will apply in most cases, especially when cost calculations include staff cost, on-going maintenance and feature updates, the risk of project failure or under-performance, and the opportunity cost of not using scarce developers to build other systems that do create unique advantages.

But the discussion changes when Customer Data Platforms are involved.* I’ve recently heard pro-build arguments that raise other valid issues. The two big ones are: 

  • most of the work in deploying a CDP is in data collection, which includes identifying source systems, understanding their contents, deciding which elements to include, and defining transformations to make the data usable. This work is the same whether you’re building or buying. Since it accounts for the bulk of the project cost, the cost to build or buy can’t be very different. 
  • many companies (especially big ones) already have systems that do much of what a CDP is intended to offer. In these situations, the incremental cost to extend existing systems will be less than the cost of adding a separate CDP, which will unnecessarily (and expensively) duplicate many existing functions. 

Both arguments attack the “buying is cheaper” assumption. Neither should be summarily dismissed. Rather, let’s fit them into a larger framework that looks at more factors to consider in a CDP build vs buy decision. To make things manageable, this framework identifies items that are the same for both, items that favor build, and items that favor buy, and groups them based on whether they apply to data collection, processing, or outputs. The table below summarizes my list; I’m sure there are others.

                    Same for both          Build advantage                  Buy advantage
  Data collection   Source analysis        Existing connectors              Prebuilt connectors
  Processing        Define requirements    Less redundancy;                 Prebuilt processes;
                                           incremental development;         more mature;
                                           custom features for              continuous updates;
                                           competitive advantage            lower risk
  Outputs           Define requirements    Existing connectors              Prebuilt connectors

Exploring this in more detail: 

  • Data Collection/Same: as already mentioned, much of the work in assembling a CDP is understanding source data.  This is required regardless of whether the CDP is built or bought. If  the data is well understood, a purchased CDP benefits as much as a built system – so long as IT staff who understand the data are available to the project. 
  • Data Collection/Build Advantage: a built CDP will take advantage of whatever connectors have already been created to feed existing systems. Note that any relative advantage is diminished if a purchased CDP can also use these connectors, either to create direct feeds or by reading data the connectors have pulled into existing data lakes or warehouses.  That should be true in most cases.
  • Data Collection/Buy Advantage: a purchased CDP will have prebuilt connectors for many source systems and often a standard API for creating new connectors. The value of this depends on how many of your company’s systems are covered, which is likely to depend on how modern they are. 
  • Processing/Same: both built and purchased systems depend on effective requirement definition, another critical task that can consume substantial project resources. There may be some additional work bringing vendor staff up to speed for a purchased system, compared with having a system built by internal staff who already understand the business. But this is probably balanced by CDP vendor staff having deeper experience with CDP-specific issues. 
  • Processing/Build Advantage: extending existing systems may mean that existing data stores can be expanded, rather than copying data into a separate CDP database. This is especially important for companies with massive data volumes. But the actual advantage depends on the technical details, since the CDP often needs data placed into a different format from existing systems, in which case both build and buy solutions will require a new data store.

A built system will only add features that are not already available in existing systems, so there’s less potential redundancy compared with a purchased system. This could mean lower operating costs but, again, it depends on how much of what the CDP does is really new.  And there's a reasonable chance a purchased CDP will actually reduce operating costs by enabling the company to sunset some existing systems or processes.

A built system can include features that are not available in a purchased CDP, creating unique competitive advantage. How much this matters will depend on how unique the company’s requirements really are, and whether a purchased CDP can also be extended to meet them. CDP vendors would argue that their systems are extremely flexible and extensible. 

  • Processing/Buy Advantage: a purchased CDP will have core CDP processes already built, saving the cost of custom development. This is probably the strongest argument for a purchased system.  Of course, it depends on how many new processes are needed and how hard it would be for the company to build them on its own. 

A purchased system will also include advanced features that wouldn’t be delivered in early versions of a built system, which will inevitably focus on meeting basic requirements as quickly as possible. It could easily take years for the in-house system to catch up with the refinements of a mature purchased product, and the purchased product will also be improving during that time. There’s a reasonable argument that a purchased CDP is likely to add features even before any particular company knows it needs them, in which case it would be impossible for the built CDP to ever meet user needs as quickly as the purchased product. One caution: purchased CDPs themselves vary greatly in their maturity, so this will apply more to some than others. 

Build and buy choices both have their risks. There’s always a chance that a purchased system won’t perform as expected, won’t evolve to meet future needs, or will be discontinued if its developer runs into business problems. But these risks can be limited through careful vendor selection and contracts. By contrast, development failures for custom software are almost the norm: industry lore is filled with high-priority projects that ran over time and over budget and still failed to meet expectations. The risk is greater for systems like CDPs, which have requirements that are less familiar to many corporate IT groups than operational systems like order processing or CRM. So, on balance, I think it’s fair to say that built solutions are higher risk – even though I realize that many in-house IT teams would disagree. Perhaps we can all agree that this is something to be assessed on a case-by-case basis. 

In making all these assessments, it’s important to look at the full scope of long-term CDP requirements. It might be relatively easy to extend existing systems to meet a handful of initial requirements, but there would then be a backlog of further enhancements that would each require additional investments. A purchased CDP should deliver a much broader set of features from the start and within its original purchase price. 

  • Outputs/Same: again, the work to define output requirements will be pretty much the same whether a system is built or bought. 
  • Outputs/Build Advantage: existing systems may have connectors in place to deliver outputs to company reporting, marketing, messaging, analytical, and other systems. This is especially helpful if the targets are legacy systems that are difficult to work with. As with inputs, a purchased CDP should also be able to take advantage of many existing connectors, so the net advantage for built systems is limited. 
  • Outputs/Buy Advantage: as with data collection connectors, purchased CDPs will have a library of prebuilt connectors for output systems. This could save considerable effort, especially if the CDP project requires large numbers of connections that don’t already exist and the CDP vendor can provide them. 

Summing all this up: some issues, such as data preparation, are less relevant to the build/buy choice than it might seem. The main factor driving the decision is the incremental work needed to build and maintain an internal solution compared with the cost of adding a purchased system. If existing systems can meet CDP requirements with relatively few changes, a built solution makes sense. If a major development project is needed, it’s probably better to buy. Because CDPs are inherently flexible, it’s unlikely that a built solution will truly provide any competitive advantage that a purchased CDP cannot duplicate with the same or less development effort. 

One important caveat to all this is that build vs buy is less a choice than a continuum. Even built solutions rely heavily on purchased components, such as data storage platforms, function libraries, and external services (e.g. third party identity resolution).  Many purchased CDPs use exactly the same tools. To the degree that builders can rely on purchased tools, they get the same benefits of using pre-built components that they would get from a purchased CDP. 

The tools available for purchase continue to improve: platforms like Google Cloud keep adding new CDP-supporting services; databases like Snowflake make it easier to manage CDP data structures; applications like Rudderstack and Informatica provide complex process flows.  But assembling a functional CDP will never be as simple as snapping these together like the proverbial Lego Blocks.  Then again, deploying a CDP also takes more than just plugging it in. 

What matters is that the tools keep getting better, meaning the cost of building is reduced. At the margin, this shifts the balance towards built systems, at least for companies with the resources to use the tools effectively. But in many cases – perhaps the vast majority – a purchased CDP still makes the most sense. 

In other words: it depends.

_____________________________________________________

* CDP Institute defines a CDP as "packaged software that creates a persistent, unified customer database that is accessible to other systems."  This means that, strictly speaking, there is no such thing as a custom-built CDP.  But we'll use "CDP" here to refer to any system that performs the functions listed in the definition: "creates a persistent, unified customer database that is accessible to other systems." In practice, many home-built "CDP" systems won't be fully accessible to other systems, either.  We'll ignore that here but note that ease of connecting with new systems is one of the advantages of buying rather than building.

 

Tuesday, March 16, 2021

ActiveNav Automates Data Inventory Updates

Our on-going tour of privacy systems has already included stops at BigID and Trust-Hub, which both build inventories of customer data. Apparently I’m drawn to the topic, since I recently found myself looking at ActiveNav, which turns out to be yet another data inventory system. It’s different enough from the others to be worth a review of its own. So here goes.

Like other inventory systems, ActiveNav builds a map of data stored in company systems. In ActiveNav’s case, this can literally be a geographic map showing the location of data centers, starting from a global view and drilling down to regions, cities, and sites. Users can also select other views, including business units and repository types. The lowest level in each view is a single container, whose contents ActiveNav will automatically explore by reading metadata attributes such as field names and data formats.

The system applies rules and keywords to the metadata to determine the type of data stored in each field, without reading the actual file contents. (A supplementary module that allows content examination is due for release soon.)  It stores its findings in its own repository, again without copying any actual information – so there’s no worry about data breaches from ActiveNav itself.

One disadvantage of ActiveNav’s approach is that relying only on metadata limits the chances of finding sensitive information that is not labeled accurately, something that BigID does especially well. Similarly, ActiveNav doesn’t map relations between data stored in different containers, so it cannot build a company-wide data model. This is a strength of Trust-Hub.

Still, ActiveNav’s ability to explore and classify data repositories without human guidance is a major improvement over manually-built data inventories. Its second big benefit is a “data health” score based on its findings. This is calculated for each container with scores for factors including: risk, including intellectual property and security issues; privacy compliance, based on presence of IDs and other data types; and data quality, including duplicate, obsolete, stale and trivial contents. Scores for each container are combined to create scores for repositories, locations, business units, and other higher levels. This gives users a quick way to find problem areas and track data health over time.
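
As a rough illustration of how such a roll-up might work, here is a toy calculation; the formula and the numbers are mine, not ActiveNav’s actual scoring method:

```python
# Toy roll-up: each container gets factor scores, a container score is their
# average, and each location averages its containers.
from statistics import mean

containers = {   # hypothetical scan results, scored 0-100
    "hr-file-share": {"risk": 40, "privacy": 55, "quality": 70, "location": "London"},
    "crm-database":  {"risk": 80, "privacy": 75, "quality": 85, "location": "London"},
    "legacy-ftp":    {"risk": 20, "privacy": 30, "quality": 45, "location": "Boston"},
}

def container_score(c):
    return mean([c["risk"], c["privacy"], c["quality"]])

def location_scores(containers):
    by_location = {}
    for c in containers.values():
        by_location.setdefault(c["location"], []).append(container_score(c))
    return {loc: round(mean(vals), 1) for loc, vals in by_location.items()}

print(location_scores(containers))   # {'London': 67.5, 'Boston': 31.7}
```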

ActiveNav addresses what may be the biggest data inventory pain of all: keeping information up-to-date. The system automates the update process by receiving continuous notifications of metadata changes from systems that are set up to send them. In other cases, ActiveNav can query repositories to look for metadata that has been updated since its last visit. Of course, this requires providing the system with credentials to access that information.
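
The “query for anything changed since the last visit” pattern is simple enough to sketch. The repository client below is imaginary, standing in for whatever change feed or metadata query a real source system offers:

```python
# Generic incremental-refresh sketch: track a last-scanned time per repository
# and pull only the metadata that changed since then.
from datetime import datetime, timezone

class MetadataCatalog:
    """Tracks a last-scanned time per repository and pulls only the changes."""
    def __init__(self):
        self.last_scan = {}

    def refresh(self, repo_name, repo_client):
        since = self.last_scan.get(
            repo_name, datetime(1970, 1, 1, tzinfo=timezone.utc))
        changes = repo_client.changed_fields_since(since)   # hypothetical method
        self.last_scan[repo_name] = datetime.now(timezone.utc)
        return changes

class FakeRepo:
    """Stand-in source that pretends one field's metadata changed."""
    def changed_fields_since(self, since):
        return [{"table": "customers", "field": "email",
                 "modified": datetime.now(timezone.utc)}]

catalog = MetadataCatalog()
print(catalog.refresh("crm", FakeRepo()))
```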

ActiveNav was founded in 2008. Until recently, it offered only a conventional on-premise software license with one-time costs starting around $100,000. It sells this primarily through partners who work on data management projects for heavily regulated industries and governments. The company has recently introduced a SaaS version of its data inventory system that starts at $10,000 per year. It also offers data governance and compliance modules.

Wednesday, February 17, 2021

Is Peak Martech Approaching At Last?


Contrary to popular belief, forecasting is easy: tomorrow is nearly always like today. What’s hard is predicting when something will change: a snowstorm, stock market crash, or disruptive technology. Of course, predicting change is also where forecasting is most useful.

In marketing technology, we’ve seen a long succession of sunny days. Every year, the number of systems grows, fed by a proliferation of channels, declining development costs, and easily available funding. The safe bet is the number will grow next year, too. But I think one day soon the pendulum will reverse direction.

I can’t point to much data in support of my position. Surveys do show that marketers don’t use the full capabilities of their existing stacks, which might mean they’re inclined to take a break before making new purchases. But marketers have never used every feature in their old systems before buying new ones. The pandemic probably led to a temporary dip in martech purchases but budgets appear to be opening up again. So the appetite for new martech will likely reappear.

My prediction is based less on martech trends than on a general impression that many people feel the world is spinning out of control and want to rein it in. Tech in particular is having impacts that no one fully understands. Concerns about disinformation, social media-induced radicalization, lost privacy, and biased artificial intelligence are all part of this. Even in the narrower spheres of martech and adtech, many users feel their systems are too complicated to really understand. Worries about ad fraud, ads appearing in the wrong places, inappropriate personalization, and unintended campaign messages all come down to the same thing: people worry their systems are making unseen bad decisions.

Technologists tend to feel the cure is more technology: smarter AI, systems checking on other systems, and democratized development to let more people build systems for themselves. But there’s an air of hubris to this. Stories from Daedalus to Frankenstein to Jurassic Park warn us advanced technology will ultimately destroy its creators. Every data breach and wifi outage reminds us no technology is entirely reliable and fixing it is beyond most people’s control.

As a result, non-technologists increasingly doubt that technology can solve its own problems. Some people will bury their worries, accepting technology’s risk as the price for its benefits. Others will take the opposite extreme, rejecting technology altogether, or at least to the greatest degree possible.

But there’s a middle ground between blindly surfing the net and leaving the grid entirely. This is to consciously seek technology that’s simpler and more controllable than current extremes, even if it’s also less powerful as a result. The key is willingness to make that trade-off, which in turn implies willingness to invest the effort needed to assess the relative value vs. risk of different technical options.

Making that investment is probably the biggest change from the current default of accepting technical progress as inevitable and trusting the technologists to appropriately balance risk against rewards when they decide which products to release. In many cases, the cost of assessment will probably be higher than the cost of using the diminished technology itself: that is, the difference in value between a more secure system and a less secure one may be less than the value of the time I spend comparing them. This means the main cost of making this adjustment isn’t the lost value from using safer technology, but the cost of assessing that technology.

In theory, the assessment cost might be reduced by splitting it among many people who would share their results. But here’s where trust comes back into play: if you can’t trust someone else to do accurate research, you can’t decide based on their results. Since loss of trust is arguably the defining crisis of today’s society, you can’t just wave it away with an assumption that people will trust others’ assessments of technology tradeoffs. Rather, the need will be to build technology that is self-evidently understandable, so that each person can assess it for herself. This will reduce the assessment cost that blocks them from choosing simpler solutions.

So here’s where I think we’re headed: away from ever-increasing, and increasingly opaque, technical complexity, and towards technology that’s simpler and more transparent. Remember: simplicity is the goal, and transparency is what makes it affordable. I call this the “new pragmatism”, although I doubt the label will catch on. As the word “pragmatism” suggests, it’s rather boring and a lot of hard work. But compared with the chaos or authoritarianism that seem to be the main alternatives, it’s about the best way we can hope our current chapter will end. After we turn the page, people may later learn to rebuild the presumption of trust that enables non-verifiable relationships.

If you’re still reading this, thanks for your indulgence; I know you don’t come to this blog for half-baked social theories. But these ideas do have direct implications for marketing and martech. If I’m right, both consumers and martech buyers will want simpler, more transparent products. For marketers, this means:

  • The time may finally have come when stripped-down versions replace feature-rich products, with a stress on ease of use rather than power. I know this idea has been tried before without success. But that was during the earlier age of techno-optimism.

  • Buyers may be more interested in products whose actual operation is transparent. This will usually mean status indicators, meters, and diagnostics to show what’s happening. In some cases it may literally mean see-through designs that let users watch, say, as the dishes are cleaned or the vacuum bag fills with dirt. Whatever it takes for a feeling of control.

  • Privacy will continue to gain importance, with particular emphasis on systems that are private by design rather than user choice. Privacy policies and options are poorly understood and mistrusted, so many consumers would rather buy a system that makes them unnecessary because it can’t collect data or connect to the internet. Of course, they need to be confident the system behaves as promised.

  • Marketing messages should also switch from promoting advanced technology to promoting simplicity, reliability, and clarity. Explanations about what’s inside a product, in terms of the technology, design and manufacturing processes, materials, and people may be more important to buyers looking for reasons to trust.

  • Marketing methods should match the claims, avoiding unnecessary personalization and staying away from mistrusted media. This is a tricky balance because few marketers will want to sacrifice the performance benefits that come from data-driven targeting. But they do need to weigh long-term brand value against short-term campaign results. For what it’s worth, relying more on basic branding and less on advanced technology is itself consistent with the return to simplicity.
  • Martech vendors will want to make all these adjustments in their own marketing. Other considerations include:


    • Artificial intelligence must be understandable. It’s tempting to suggest discarding AI altogether, since it may be the ultimate example of complicated, opaque, and ungovernable technology. But the apparent benefits of AI are too great to discard. The pragmatic approach is to demand proof that AI really delivers the expected benefits. Then, assuming the answer is yes, find ways to make AI more controllable. This means building AI systems that explain their results, let users modify their decisions, and make it easy to monitor their behaviors. These are already goals of current AI development, so this is more a matter of adjusting priorities than taking AI in a fundamentally different direction.

    • Reconsider the platform/app model. This may be blasphemy in martech circles, since the martech explosion has been largely the result of platforms making it easier to sell specialized apps. But the platform/app model relies on trust that apps are effectively vetted by platform owners. If that trust isn’t present, assessment costs will pose a major barrier to new app adoption. At best, buyers with limited resources (which is everyone) would be able to afford fewer apps. At worst, people will stop using apps altogether. So the pragmatic approach for platforms and app developers alike is to work even harder at trust-building. We already see this, for example, in Apple’s new requirements for data privacy labels and tracking consent rules (https://developer.apple.com/app-store/user-privacy-and-data-use/). What Apple hasn’t done is to aggressively audit compliance and publicize its audit programs. The dynamic here is that users will make more demands on platforms to prove they are trustworthy and will concentrate their purchases on platforms that succeed. Since selecting a platform carries its own assessment cost, we can expect users to deal with fewer platforms in total. This means the trend for every major vendor to develop its own platform ecosystem will reverse. Looking still further ahead: fewer platforms give the remaining platforms more bargaining power with app developers, so we can expect higher acceptance standards (good) and higher fees (not so good). The ultimate result is fewer app developers as well.

    • Rebirth of suites. That’s not quite the right label since suites never died. But the point is that buyers looking for simplicity and facing higher assessment costs will find suites more appealing than ever. Obviously, the suites themselves must meet the new standards for simplicity, value and transparency, so integrated-in-name-only Frankensuites don’t get a free pass. But once a buyer has decided a suite vendor is trustworthy, it’s far more attractive to use a module from that suite than to assess and integrate a best-of-breed alternative. Less obvious but equally true: building a system in-house also becomes less attractive, since in-house developers will also need to prove that their products are effective and reliable. This will necessarily increase development costs, so the build/buy balance will be tilted a little more towards buying – especially if the assessment costs of buying are minimal because the purchased option is part of a trusted suite. It’s true that this doesn’t apply if companies require users to accept whatever their in-house developers deliver. But that doesn’t sound like a viable long-term approach in a world where the gap between poor in-house systems and good commercial products will be larger than ever.

    • Limits on citizen developers. If blasphemy comes in degrees, this takes me to the professional-grade, eternal-damnation level. On one hand, nothing is more trusted than something a citizen developer creates for herself: she certainly knows how it works and can build in whatever transparency and monitoring she sees fit. So the new-pragmatic world is likely to see more, not fewer, user-built systems. But if we’ve learned anything from decades of using Excel, it’s that complex spreadsheets almost always contain hidden errors, are opaque to anyone except (maybe) the creator, and are exceedingly fragile when change is required. Other user-built solutions will inevitably have similar problems. So even if users trust whatever they’ve built for themselves, everyone else in the organization will be, and should be, exceedingly cautious in accepting them. In other words, the assessment cost will be almost insurmountably high for all but the simplest citizen-developed applications. This puts a natural, and probably shrinking, limit on the ability of citizen-developed systems to replace commercial software or in-house systems built by professional developers. In practice, citizen development will be largely limited to personal productivity hacks and maybe some prototyping of skunkworks projects. This doesn’t mean that no-code and low-code tools are useless: they will certainly be productivity-enhancers for professional developers. Don’t sell those Airtable options just yet.

    I’ll caution again that the picture I’m drawing here is far from certain to develop. I could be wrong about the change in social direction – although the alternative of continued disintegration is ugly to contemplate. Even if I’m right about the big shift, I could be wrong about its exact impact on marketing and martech. Still, I do believe that current trends cannot continue indefinitely and it’s worth considering what might happen after their limits are reached. So what I’ll suggest is this: keep an eye out for developments that fit the pattern I’m suggesting and be ready with suitable marketing and martech strategies if things move in that direction.

    *                *                 *

    Addendum: The core argument of this post is “people feel the world is spinning out of control and trust will solve that problem”. That feels like a non sequitur, since it’s not obvious how trust creates control. It also feels uncomfortably hierarchical, and perhaps elitist, if “control” implies a central authority.  (Note: you might read “control” as referring to people controlling their own personal technology and data. But fully self-sovereign individuals can still cause chaos if there’s not some larger control framework to constrain their actions.)

    But it's not a non sequitur because there is in fact a clear relationship between trust and control. Specifically:

    • Trust can be defined as the belief that someone will act in the way you want them to.
    • Control is a way to force someone to act in the way you want. 
    • Thus, trust and control are complementary: the greater trust you have in someone, the less control you need over them (to still ensure they act the way you want).
    Although power-hungry people might enjoy control for its own sake, most people will care only about achieving the desired result. So the solution to a world “spinning out of control” isn’t necessarily reinstating hierarchical, elite authority; it can also be generating trust.  Both yield the same outcome of predictable desired behaviors. 

    This applies in particular to the discussion of citizen development and no-code software, which seems to imply that applications can only be used by more than one person if there’s a central authority to coordinate and approve them.  This is where "governance" comes in.  It's correct that self-built software needs to meet certain standards to be safely used and shared.  But "governance" can be achieved either through control (a central authority enforces those standards) or through trust (convincing users to apply those standards by themselves).  Either approach can work but trust is clearly preferable.

     

    Wednesday, January 27, 2021

    Lego Blocks, Pickup Trucks, and Why Bloomreach Bought Exponea


    Yesterday brought news that CDP Exponea had been purchased by ecommerce recommendation engine Bloomreach. The deal almost exactly parallels last year’s merger between RichRelevance and Manthan, as well as the smaller-scale combination of CrossEngage with Gpredictive. It also recalls other recent CDP acquisitions including Acquia buying AgilOne, Chapsvision buying NP6, SAP buying Emarsys, and Wunderkind buying SmarterHQ.

    It’s easy (and correct) to see these deals as efforts to assemble comprehensive marketing suites. But it’s not just that the buyers want to add a CDP to their collection. These deals all involve CDPs with marketing automation functions (that is, segmentation, message selection, campaigns, personalization, and cross-channel orchestration). CDP Institute labels these as “campaign” or “delivery”; others sometimes refer to them as “activation” or “execution” CDPs. This type of CDP provides the biggest head start towards building a marketing suite. The drive to build suites clarifies why predictive analytics vendors Bloomreach, RichRelevance, and Gpredictive are such frequent partners: stand-alone predictive tools are missing nearly all the features needed for a full marketing platform, so they have the most to gain from buying a CDP that fills those gaps.

    Of course, the biggest CDP acquisition of all, Twilio’s purchase of Segment, doesn’t fit this mold. Segment was more of a pure-play or "data" CDP, limited to assembling and sharing customer profiles. But Twilio isn’t looking to build a marketing suite; their core business is call centers and (after buying SendGrid) email messaging. They have their sights set on providing a communications layer to support all customer-facing operations, including sales and service. Still a suite, but a different kind.

    The drive to construct comprehensive marketing suites is interesting because it conflicts with the current notion that marketers don’t want big, integrated products but instead want to create their own collections of components, building some parts with the latest self-service tools and connecting the rest through microservices, open APIs, and other technical wizardry. The pure vision is a distributed, non-hierarchical architecture, modeled roughly on the Internet itself, where any system can connect with any other system. The more practical vision is a platform-centric world where any system can plug into a central platform that provides basic services. In both visions, companies construct their own, highly customized collections of systems that are perfectly tailored to their needs.

    Simply put, the vendors assembling these suites are betting that vision is wrong. They believe – based no doubt on what buyers are telling them – that companies still want to buy an integrated product that meets their needs without any assembly required. The purely practical reason is that companies don’t assemble systems for their own sake; they assemble them as tools to do what’s really important, which is to make money (usually by delivering goods and services to customers). Sure, you can build a pickup truck from Lego blocks and you might even do that for fun.  But if you actually need to haul something, you’ll go to a dealer and buy one.

    In other words, we still live in a world ruled by Raab's Law, which is “Suites win”. (More formally: In the long run, suites always win the competition between suites and best-of-breed systems.) Platforms don’t change this as much as you’d think, because customers always want the platforms themselves to add more features and make them tightly integrated. It's true that buyers want third-party applications that can extend platform capabilities.  It's also in the platform vendors’ interest to be open to those applications since they add value to the platform at little cost to the platform owner. But there’s a time and effort cost to the user of selecting, connecting, and learning to use each new application, regardless of whether the app is “free” or how easily it connects. Users are very aware of these costs, which is why they want the core platform to offer as many features as possible. Put another way: the value of applications is they enable users to add features a platform lacks; but the more features a platform provides internally, the more value it provides from the start. This pushes platform vendors to add features that save users from the need to install apps. Of course, the art of platform management is knowing which features are popular and standardized enough to be incorporated.

    As new features are added, platforms increasingly resemble integrated suites. The significant difference from suites is that platforms offer users the option to replace the platform’s default components with external alternatives. But if the platform builders do their jobs correctly, users will find less need to do that over time.

    This is what makes campaign CDPs so attractive to companies attempting to construct a new marketing suite. The marketing features of the CDPs provide a core of functionality that marketers are looking for. In addition, and crucially, the core CDP features make it easier for the suite vendor to integrate components of its own system and also enable the vendor to offer platform-style flexibility by connecting with external systems.

    What, then, do these acquisitions tell us about the future of the CDP industry? The first thing to realize is that most CDPs already fall into the campaign and delivery categories (70% of the industry, measured on company count or employment, according to our statistics).  Most of these firms actually started as marketing automation, personalization, or delivery systems and added CDP capabilities later. Some already provide an integrated marketing suite; the others can expand in that direction on their own or through combinations with other products.  

    It will be increasingly difficult for this type of CDP to survive without a broad set of marketing functions. Competitive pressures will force them to improve those features while treating core CDP capabilities of building and sharing unified profiles as just one talking point. We’ve already seen some of these vendors limit their investment in CDP features and instead partner with other, data-oriented CDPs to meet those needs. (We also expect these firms to increasingly specialize by industry and company size. This makes it easier for them to build connectors to operational systems such as point of sale in retail or reservations in travel, as well as building industry-specific features, hiring industry-expert staff, and fine-tuning delivery and pricing models to meet target price-points.)

    The other 30% of the CDP industry are vendors specializing in data management and analytics. We uncreatively call these "data" and "analytics" CDPs. Many started life as tag managers, data collection, or predictive modeling systems; others were built as CDPs from the beginning. As the Twilio/Segment deal illustrated, data CDPs may also be acquisition targets, especially for companies that are aiming to build a corporate-level backbone rather than a marketing suite. Firms that aren't acquired will be able to remain independent by offering best-of-breed customer data unification services to companies that need and can pay for a best-of-breed solution. These will likely be large enterprises. This type of CDP will increasingly be purchased by IT and data departments, rather than marketing, and will come to look more like IT tools than end-user applications. As such, they’ll find themselves increasingly competing with general purpose data management tools from other software providers and from data management and analytics tools built into the big cloud platforms (Google Cloud, AWS, Azure). So far, the specialized features of the most sophisticated data CDPs are more advanced than what’s available elsewhere. Some of these vendors will continue to innovate and ultimately emerge as strong leaders in this segment. Others will probably withdraw into niches or sell themselves to other companies that want to jumpstart their own CDP offerings.

    One happy byproduct of these developments may be a final end to the theological debate over the proper definition of “Customer Data Platform”. As the campaign and delivery CDPs position themselves as marketing suites or platforms, they’re likely to move away from CDP as their primary label. But they’ll still need the world to know that they offer the core CDP capabilities of unified profile creation and sharing. With any luck at all, they’ll handle this by labeling those features as "CDP" when they describe their system capabilities. This should eventually lead to a more consistent use of the CDP term throughout the market and, thus, less confusion over what it means. The data and analytics CDPs already define CDP in terms of the same core capabilities. Some of those firms are today pulling away from verbal confusion by labeling themselves as “infrastructure” or “pipeline” customer data platforms. If the narrower definition of CDP reasserts itself, they may come back to adopting the CDP label itself.


    Sunday, January 03, 2021

    Software Has Stopped Eating the World

    This August will see the tenth anniversary of Marc Andreessen’s famous claim that software is eating the world. He may have been right at the time but things have now changed: the world is biting back.

    I’m not referring to COVID-19, although it’s fitting that it took an all-too-physical virus to prove that a digital bubble of alternate facts could not permanently displace reality. Nor am I juxtaposing the SolarWinds hack with the unexpectedly secure U.S. election, which showed a simple paper trail succeed while the world’s most elite computer security experts failed.

    Rather, I’m looking at the most interesting frontiers of tech innovation: self-driving vehicles, green energy, and biosciences top my list. What they have in common is interaction with the physical world. By contrast, recent years haven’t seen radical change in software development. There have certainly been improvements in software, but they’re more about architectures (cloud, micro-services) and self-service interfaces than fundamentally new applications. And while most physical-world innovations are powered by software, the importance of those innovations is that they are changing physical experiences, not that they are replacing them with software-based virtual equivalents.

    Even the most important software development of all – artificial intelligence – measures much of its progress by its ability to handle physical-world tasks such as image recognition, autonomous vehicle navigation, and recognizing human emotion. Let’s face it: it’s one thing for a computer to beat you at Go, but quite another for it to beat your dance moves.  Really, what special talent is left for humans to claim as their own?

    The shift is well under way in the world of marketing. One of the more surprising developments of the pandemic year was the boom in digital out-of-home advertising, which includes outdoor billboards and indoor signage. The growth seemed odd, given how much time people were forced to spend at home. But the industry marched ahead, spurred in good part by increased ability to track devices as they move through the physical world. It’s a safe bet that out-of-home ads will grow even faster once people can move about more freely.

    Indeed, the industries hit hardest by the pandemic – travel and events – also show that virtual experiences are not enough. Whatever their complaints before the pandemic, almost everyone who formerly traveled for business or attended business events is now eager to return to seeing people and places in person. The amount of travel will surely be reduced but it’s now clear that some physical interaction is irreplaceable.

    In a similarly ironic way, the pandemic-driven boost to ecommerce has been accompanied by a parallel lesson in the importance of physical delivery. Almost overnight, fulfillment has gone from a boring cost center to a realm of intensive competition, innovation, and even a bit of heroism. Software plays a critical role but it’s a supporting actor in a drama where the excitement is in the streets.

    Still closer to home for marketers, we’ve seen a new appreciation for the importance of customer experience, specifically extending past advertising to include product, delivery, service and support. If the obsession of the past decade has been targeted advertising, the obsession of the next decade will be superior service. This ties into other trends that were already under way, including the importance of trust (earned by delivering on promises through fulfillment, not making promises in advertising) and the shift from prospecting with third party data to supporting customers with first party data. Even at the cutting edge, advertising innovation has now shifted to augmented reality, which integrates real-world experiences with advertising, and away from virtual reality, which replaces the real world entirely.

    This shift has substantial implications for martech.

    - The endless proliferation of martech tools may well continue, especially if the definition of “tools” stretches to include self-built applications. But the importance of tools that only interact with other software will diminish. What will grow will be tools that interact with the real world, and it’s likely those tools will be harder to find and (at least initially) take more skills to use. It’s the difference between building a flight simulator game and an actual aircraft. The stakes are higher when real-world objects are involved and there’s an irreducible level of complexity needed to make things work right.

    - As with all technology shifts, the leaders in the old world – the big software companies and audience aggregators like Facebook and Google – won’t necessarily lead in the new world. Reawakened anti-trust enforcement comes at exactly the worst moment for big tech companies needing to pivot. So we can expect more change in the industry landscape than we’ve seen in the past decade.

    - New skills will be needed, both to manage martech and to do the marketing itself. The new martech skills will involve learning about new technologies and tighter integration with non-marketing systems, although fundamentals of system selection and management will be largely the same. The marketing skill shift may be more profound, as marketers must master entirely new modes of interaction. But, again, the marketer’s fundamental tasks – to understand customer motivations and build programs that satisfy them – will remain what they always were.

    It’s been said that people overestimate short-term change and underestimate long-term change.  The shift from software to physical innovation won’t happen overnight and will never be total. But the pendulum has reversed direction and the world is now starting to eat software. Keep an eye out for that future.