Sunday, October 29, 2023

Does CDP Need a New Definition?

The earliest Customer Data Platform systems were introduced before 2010; the term CDP was coined to describe this emerging class in 2013. My definition had changed very little when we launched the CDP Institute in 2016, and has been the same ever since: “packaged software that builds a unified, persistent customer database accessible to other systems”. The Institute added the RealCDP checklist* in 2019 to attach more specifics to the definition, in the hope of helping buyers confirm that a system calling itself a CDP could actually support the use cases they expected. By then, industry analysts were beginning to offer their own definitions which, while worded differently, were broadly consistent with the Institute definition. Even the major marketing suite vendors, who initially argued a separate (“persistent”) database wasn’t necessary, eventually discarded that position and introduced products that matched our criteria.

A successful concept like CDP quickly takes on a life of its own. It soon became apparent that many people were using CDP in a much looser sense to mean any system that built and shared customer profiles. This extended past packaged software to include custom-built systems, and took in systems whose scope was more limited than a true CDP. This expansion skewed some survey results but otherwise seemed relatively harmless; in any case, resistance seemed both pedantic and futile. What really mattered was that these systems still gave CDP users access to the unified profiles.

Unfortunately, the evolution of the term didn’t stop there. As CDP became popular, many vendors adopted the label whether or not they actually met even the looser definition. At the same time, legitimate CDP vendors offered additional capabilities to analyze and deploy the data in the profiles. The resulting confusion ultimately led some vendors to avoid the CDP label entirely because it no longer provided a useful way to differentiate their products. Today, vendors seeking the latest label are more likely to call themselves digital experience managers than CDPs, even if their products meet the CDP requirements.

But the greatest challenge to the utility of the CDP label arose in the past few years, when a number of vendors chose to claim that the core feature of the CDP – a dedicated database built by importing data from other systems, a.k.a. “persistence” – could be abandoned while still calling the result a CDP. Their argument was that the customer profiles could reside in a general purpose data warehouse, which most companies already had in place.

The claim gained some plausibility from the fact that modern cloud data warehouse technologies, such as Snowflake, Google BigQuery, and Amazon Redshift, are in fact used by some conventional CDP vendors. The problem was that they implicitly assumed every company’s data warehouse had organized the data into usable customer profiles. In fact, very few data warehouses perform the specific tasks, most notably identity resolution, customer-level aggregation, and real-time response, needed to support CDP use cases. While it’s technically possible to add those features to an existing data warehouse, it’s usually a major project that often costs more, takes longer, and delivers less useful results than installing a separate, conventional CDP. (As always, the details depend on the situation.)
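
To make this concrete, here is a minimal sketch, in Python, of two of those tasks – identity resolution and customer-level aggregation – that a warehouse team would have to build for themselves. All table and column names are invented, and the matching logic is deliberately simplistic:

```python
import pandas as pd

# Raw events from two source systems, each carrying its own identifiers.
web = pd.DataFrame({
    "email": ["ann@x.com", "ann@x.com", "bob@y.com"],
    "device_id": ["d1", "d1", "d2"],
    "page_views": [8, 4, 3],
})
orders = pd.DataFrame({
    "email": ["ann@x.com", "bob@y.com"],
    "order_total": [120.00, 45.50],
})

# Identity resolution, deterministic and on email alone. Real matching also
# weighs phone, postal address, device graphs, and probabilistic rules.
id_map = (
    web[["email", "device_id"]]
    .drop_duplicates()
    .assign(customer_id=lambda df: "c" + df["email"]
            .rank(method="dense").astype(int).astype(str))
)

# Customer-level aggregation: roll raw events up into one row per person,
# then attach order history to form a unified profile.
profiles = (
    web.merge(id_map, on=["email", "device_id"])
       .groupby("customer_id")
       .agg(email=("email", "first"), page_views=("page_views", "sum"))
       .reset_index()
       .merge(orders, on="email", how="left")
       .rename(columns={"order_total": "lifetime_value"})
)
print(profiles)  # one row per customer: id, email, page_views, lifetime_value
```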

One positive result of the interest in warehouse-based profiles has been the decision of some CDP vendors to break their systems into modules that let users buy the data preparation functions separately from the rest of the CDP. This lets companies that want a warehouse-based system still benefit from the mature capabilities those CDP vendors have developed over many years. These vendors have also often added the ability to combine data from a warehouse or other external system with data stored in a conventional, persistent CDP database, without actually loading that external data into the CDP. This gains some advantages of the warehouse-based approach, such as reduced data movement and storage costs, while retaining the benefits of the persistent CDP, such as greater control and flexibility.
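
A rough sketch of that hybrid pattern, with the warehouse call as a stand-in rather than any vendor’s actual connector: core profile fields live in the CDP’s own store, while bulkier data stays in the warehouse and is queried on demand instead of being copied in.

```python
# Core profile fields held in the CDP's own persistent store.
CDP_STORE = {"c1": {"email": "ann@x.com", "segment": "loyal"}}

def query_warehouse(sql: str) -> list[dict]:
    # Stand-in for a live warehouse query (e.g. against Snowflake); returns
    # canned rows here so the sketch is self-contained.
    return [{"returns_last_year": 2}]

def hybrid_profile(customer_id: str) -> dict:
    profile = dict(CDP_STORE.get(customer_id, {}))
    rows = query_warehouse(
        "SELECT returns_last_year FROM returns "
        f"WHERE customer_id = '{customer_id}'"
    )
    if rows:
        # Enrich on the fly; the external data is never loaded into the CDP.
        profile.update(rows[0])
    return profile

print(hybrid_profile("c1"))
# {'email': 'ann@x.com', 'segment': 'loyal', 'returns_last_year': 2}
```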

You’ll note that all these changes affect the input side of the CDP data flow. It was once a very simple process: data from other systems was copied into the CDP, where it was formatted into profiles and shared with other systems. Now, the input process may be some mix of copying data into the CDP, reading fully-formed profiles from a warehouse, or combining internal and external data. By contrast, the delivery side of the process has remained the same: the CDP shares its profiles with other systems. 

In some cases, this has led to a subtle shift in perception of the CDP’s purpose: from a system that builds customer profiles, to a system that delivers those profiles to other systems. In this view, the core function of the CDP is to convert general purpose profiles into the specific formats needed by analytical, orchestration, and delivery systems (which we can call “activation systems” if you’ll trade some jargon for simplicity). This may still involve some data processing, such as advance calculation of model scores and aggregates to make them available in real time, and new data structures to hold the results of that processing. So there’s more happening here than a simple data transfer or API call.
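
Here is a hypothetical sketch of that delivery-centered view: one general-purpose profile converted into the different payloads an email tool and an ad platform expect, with a churn score computed in advance so it can be served instantly. All field names and the scoring logic are invented for illustration:

```python
import hashlib
from datetime import date

# One general-purpose profile as the CDP might hold it.
profile = {
    "customer_id": "c1",
    "email": "ann@x.com",
    "lifetime_value": 120.00,
    "last_order_at": date(2023, 9, 1),
}

def churn_score(p: dict, as_of: date = date(2023, 10, 29)) -> float:
    # Stand-in for a model score calculated in advance, so it can be served
    # during an interaction rather than computed on the spot.
    days_idle = (as_of - p["last_order_at"]).days
    return min(1.0, days_idle / 365)

def to_email_payload(p: dict) -> dict:
    # Email tools usually key on the address and take a few merge fields.
    return {"address": p["email"], "merge_fields": {"LTV": p["lifetime_value"]}}

def to_ad_audience_row(p: dict) -> dict:
    # Ad platforms typically want hashed identifiers plus a segment flag.
    hashed = hashlib.sha256(p["email"].encode()).hexdigest()
    return {"hashed_email": hashed, "in_winback_segment": churn_score(p) > 0.1}

print(to_email_payload(profile))
print(to_ad_audience_row(profile))
```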

Some people carry this shift even further and argue the CDP should really be defined as an activation system that reads profiles from somewhere else. Since three-quarters of the CDPs we track actually do have at least some activation capabilities, this isn’t quite as crazy as it may sound.

All that said, I’m not yet ready to redefine CDP. “Packaged software” and “persistent customer database” are meaningful terms that distinguish one configuration from other approaches (“custom software” and “external profiles”) that can also make profiles available to other systems. More important, the packaged/persistent configuration has significant cost and execution advantages over the custom/external alternative. So it’s important to avoid blurring the distinction between the two approaches. 

At most, I’d offer retronyms that distinguish the “packaged CDP” (packaged/persistent) from the “warehouse CDP” (custom/external). By definition, the “warehouse CDP” doesn’t build its own profiles, so the term seems to ignore the fundamental function of the CDP. You might call that oxymoronic, if not just plain moronic. But if we slightly redefine the core function of the CDP to be delivery of profiles rather than creating them, we can overcome that objection and, perhaps, give the market terms it intuitively understands.

You’ll also note the shift from profile creation to delivery puts the emphasis on the delivery side of the CDP flow, which we’ve noted has been the most stable and applies to both the packaged and warehouse approaches. This lets us join the trend toward redefining CDP as a profile-sharing system.

In short, the key distinction is whether the primary customer profiles are built and stored in the company data warehouse or in a separate CDP database. It’s worth maintaining that distinction because one approach relies on the company’s technical staff to assemble the functions needed to build and store the profiles, while the other relies on a packaged CDP to provide those functions. There are variations within each theme, including whether the warehouse uses modules from a CDP vendor to assemble its profiles or to help deliver them, whether real-time data is posted to the warehouse or held separately, and whether the CDP enriches its profiles with external data without importing that data. These are important from a practical standpoint, but do not affect the fundamental architecture of the system. Focusing first on where the profiles reside should help buyers understand the most important choice they have to make. This choice will in turn determine the other decisions they have to consider. 

_________________________________________________________

* ingest data from all sources, retain all details, keep the data as long as desired, build unified profiles, share the profiles with any other system, and enable real-time event triggers and profile access.

Monday, September 18, 2023

Do Self-Service Systems Really Lead to Better Results? Our Member Survey Offers Surprising Answers to Industry Questions

The CDP Institute just published its annual Member Survey, which is always a treasure chest of interesting data. I’ve already published my primary analysis on the Institute site (you can download it here) but wanted to call out a number of findings that either contradict or confirm martech industry conventional wisdom. After all, nothing’s more fun than tweaking the nose of authority.

Data unification is growing: false. Most of us have a deep-rooted confidence in progress, at least when it comes to technology (human nature is another matter). The need for unified customer data has now been so widely understood for so long that we just naturally assume more companies will have built it. That held true from the survey’s start in 2017 through last year’s edition: both CDP deployment and the presence of a unified customer database increased steadily. But both measures fell in the current survey. It’s hard to imagine companies have actually abandoned their unified databases, but even if growth has only slowed, that would be a big surprise.


Budget pressures are slowing industry growth: true. Some industry vendors report business is booming, but most will admit buyers are taking longer to make decisions. Sure enough, the fraction of vendors who reported growth in CDP investment is down from last year’s survey, both for the past year and the current year. 


Budgets may not be the only reason for slower growth, but the budget pressures rose more quickly than any other obstacle (from 20% to 29%). Cooperation, which can also be a symptom of budget pressures, grew the second-fastest (36% to 43%). 

Budget-pressed buyers are making smarter decisions: false. I can’t point to anyone who has made this exact claim, but I think it’s implicit in reports that companies have more martech than they need and hope to simplify their stacks in the future.

The survey does show that budget pressures are changing martech selection methods: more are selecting on cost (up from 43% to 54% for operating cost and 42% to 51% for initial cost), while selection on feature sophistication and breadth has fallen the most (26% to 15% and 41% to 24%).

Unfortunately, our surveys have consistently found that selection on cost correlates with low satisfaction with martech investments, while selection on features correlates with high satisfaction. So it seems that a short-term focus on cost is likely to cause long-term problems with martech results.

Martech departments are growing: true. The fraction of survey respondents who said they work in a martech department has more than doubled since the last survey, which was the first that listed martech as an option. It’s unlikely that the number of people working in martech has actually doubled in the past year, but it does seem reasonable to believe that some meaningful fraction of employees have been moved from marketing or IT into a dedicated martech department.

IT staff is playing a larger role in martech decisions: true. Despite the growth in respondents who work in martech departments, the current survey showed a small increase in the fraction of respondents who reported that corporate IT manages their marketing technology (28% to 30%) and sharp declines in the fractions reporting that a martech team was in charge (43% to 32%) or that each department runs its own martech (27% to 18%). This may be another cost-saving measure, or it may reflect the growing importance attached to customer data (and martech in general) throughout the enterprise.

The bad news is that IT responsibility also correlates with lower martech satisfaction. Bear in mind that survey respondents are themselves mostly martech and marketing people, who are generally happiest when their own team is in charge. IT people would probably answer differently, but they are a small fraction of the survey respondents.


CDP projects are easy: false. Past surveys have found that about 60% of deployed CDPs are reported as delivering value, while the remaining 40% are struggling. I have always suspected most of the 40% are new projects that will deliver value eventually. The success rate is much higher in the current survey (80%), possibly because the slowdown in deployment has meant there are fewer new projects. But I'd want to see similar results in one or two other surveys before accepting that as a trend.

A separate question, asking vendors about success rates, finds the fraction who say almost all or a majority of projects are successful has grown from 48% to 54%. While this might suggest some improvement in success rates, the more important message is that only a bare majority of vendors say most CDP projects succeed. Even when answers from CDP vendors are tabulated separately, just 68% say nearly all or a majority of projects are successful. Figures are lower for service providers (45%) and other respondents (15%).

It's important to realize this is no worse than success rates for other large system deployments.  In fact, it's apparently better than average, since most studies put failure rates at 60% to 70%.  (See this page for a compendium.)  But these findings should dispel any notion that CDP deployments are easy. 


CDP projects fail because of technical complexity: false. It's also important to recognize that CDP failure rates are not due to any inherent problem with CDP technology. As in previous surveys, by far the top reason for CDP project failure is organizational. This has grown even more prominent in the current survey, while problems with poor requirements and CDP performance have fallen.

Comparing CDP status with satisfaction offers additional insight.  The satisfaction measure reflects success with martech in general, not with CDP in particular.  In other words, companies with a high score are "good at martech".  So, while it's not surprising that satisfaction rates are high among companies with a successfully deployed CDP and low among those who have not, this does support the position that CDP success is based more on the skills of the organization than CDP technology in particular. 

Privacy is growing more important: true. Last year’s survey showed a distressing decline in the priority given to data privacy regulations, with the share of firms making little effort to comply growing from 12% to 20%. That trend has now reversed, with the share of companies using privacy as a selling point increasing from 21% to 27%. 

Privacy was also listed in the CDP benefits question for the first time.  It was cited by 22% of respondents, ranking seventh of eleven items, and shows a below-average satisfaction score. This may indicate that most CDP users are looking elsewhere for their primary privacy management solution.

Self-service leads to success: false. The martech management section of this year’s survey added a new question about whether companies seek systems that empower business users to execute tasks without technical assistance. The concept of “no-code” is directly relevant here, although we didn't use the term.  This capability was by far the most common management technique, which wasn’t surprising: “empowering end-users” is a popular goal that saves money and makes users more effective.   But it also correlated with a low satisfaction score, which was surprising indeed. Is it possible that self-service doesn’t save money or make users more effective after all?

Answers to another survey question shed a bit more light. Among the options listed for CDP capabilities were self-service data extracts and self-service predictive models. Self-service extracts were a common requirement (41%) correlated with a roughly average satisfaction rating, while self-service models were an uncommon requirement (8%) correlated with a very low satisfaction rating. My interpretation is that self-service extracts are a simple task that’s well understood by business users, so companies with well-run martech operations provide that capability. By contrast, self-service predictive models are a complicated task that few users are equipped to handle on their own, so they are prioritized largely by companies with a poor understanding of what makes for martech success. The larger message is that self-service should be deployed only when users are ready for it – and pushing beyond those limits can cause problems despite the apparent savings in cost and time.

Real-time processing is a high priority for most users: mixed. Real-time processing is commonly cited as an important CDP capability. In fact, the CDP Institute’s RealCDP requirements include real-time access to profiles and real-time event triggers. Again looking at the capabilities question, we see that real-time profiles are indeed a common requirement (42%), while real-time recommendations are much less common (15%) but correlate with very high satisfaction. The lesson here is that there are different types of real-time processing, and users consider some more important than others. Discussions about real-time should include similar nuance.

CDP must load data from all sources and retain full details: mixed. Both of these items are on the RealCDP requirements list. But loading data from all sources is the highest ranked capability (78%) and correlates with roughly average satisfaction, while loading full detail is cited by just 14% of respondents and correlates with much lower satisfaction. As with real-time and self-service, I take these answers to show that users have a mature understanding of what they do and don’t need from the CDP.  Loading all data sources is a core CDP promise and goes back to the problem CDP was originally designed to solve: getting access to all customer data. Storing full detail is not needed in most situations.  In practice, users choose which data elements are worth the cost. That said, I still believe that users want the option to store any particular details they need.

CDP users want to access their data warehouse directly: false. This year’s survey added a capabilities question about reading data from an external source without loading it into the CDP. This is a hot topic in the industry, both as a way of supplementing a traditional CDP (which maintains its own data store) and as a way of building a CDP-equivalent system that relies only on data in the enterprise data warehouse. Only a small fraction of respondents (14%) were interested in this and that group reported exceptionally low satisfaction levels. I interpret this to mean that, despite all the marketplace noise, there is actually very little interest among knowledgeable users in building a CDP that relies primarily on external data.

Summary

This report contains more bad news about industry growth and budget pressures than I like to see, but the tech sector’s troubles are well known and it’s not surprising that CDP projects should suffer with everyone else. I’m especially reluctant to highlight the data about success rates, because I know it will be taken out of context by vendors eager to promote alternatives to CDPs. I should stress again that the main obstacles to CDP success are organizational, not technical, and that success rates reported here are consistent with tech project success rates in general.  Bottom line: CDPs are major projects that don’t always go smoothly but are not more failure-prone than any other major IT effort.

What worries me more than bad news is the hints that companies are making poor decisions. Prioritizing cost over requirements will surely lead to more companies purchasing unsuitable products. Putting IT in charge of martech will almost surely lead to unhappy martech users. While budget issues are unavoidable, poor management decisions are unforced errors. Your company should work hard to avoid them.

All that said, the most interesting results are the ones that challenge conventional wisdom. Do self-service systems really lead to bad results? Is the importance of real-time overstated? Is there really so little interest in reading data directly from a warehouse? These are headline topics in martech today, and are often accepted as truth with little discussion. The questions are worth asking because the reality is almost always more complicated than the simple answers. When is self-service useful and when is it over-used? What kinds of real-time processes are really important? How is external data best integrated with a conventional CDP? Answering those questions will help companies make better decisions and ultimately lead to greater martech success.

Thursday, September 14, 2023

Unleashing the Power of Customer Data Platforms (CDPs) and AI: A Game-Changer for Modern Marketing

For some unknown reason, my last three presentations all started as headlines (two created by someone else) which I then wrote a speech to match. This isn’t my usual way of working. It does add a little suspense to the writing process – can I develop an argument to match the title? It also dawns on me that this is the way generative AI works: start with a prompt and create a supporting text. That’s an unsettling thought: are humans now imitating AI instead of the other way around? Or have I already been replaced by an AI but just don’t realize it? How would I even know?

The latest in this series of prompt-generated presentations started when I noticed that the title shown in a conference agenda didn’t match the speech I had prepared. When I pointed this out, the conference organizers said I could give any speech I wanted. The problem was, I really liked their title: “Unleashing the Power of Customer Data Platforms (CDPs) and AI: A Game-Changer for Modern Marketing”.

The idea of “unleashing” a CDP to run wild and exercise all its powers is not something I get to talk about very often, since most of my presentations are about practical topics like defining a CDP, selecting one, or deploying one. And I love how “and AI” is casually tucked into the title like an afterthought: “there’s this little thing called AI, perhaps you’ve heard of it?”

And how about “game changer for modern marketing”? That’s an amazing promise to make: not only will you learn the true meaning of modern marketing, but you'll find out how to change its very nature so it’s a game you can win. Who wouldn’t want to learn that?

This was definitely a speech I wanted to hear. The only problem was, the only way for that to happen was for me to write it. So I did. Here’s a lightly modified version.

Let’s start with the goal: winning the modern marketing game. The object of the game is quite simple: deliver the optimal experience for each customer across all interactions. And, when I say all interactions, I mean all interactions: not just marketing interactions, but every interaction from initial advertising impressions through product purchase, use, service, and disposal. I also mean every touch point, from the Internet and email through call centers, repair services, and the product itself.

That’s a broad definition, and you will immediately see the first challenge: all the departments outside of marketing may not want to let marketing take charge of their customer interactions. Nor is your company’s senior management necessarily interested in giving marketing so much authority. So marketing’s role in many interactions may be more that of an advisor. The best you can hope for is that marketing is given a seat at the table when those departments set their policies and set up their systems. This requires a cooperative rather than a controlling attitude among marketers.

The second challenge to winning at marketing is the fragmented nature of data and systems. Most marketing departments have a dozen or more systems with customer data; at global organizations, the number can reach into the hundreds.  Expanding the scope to include non-marketing systems that interact with customers adds still more sources such as contact centers and support websites. Again, marketing will rarely control these. At best, they may have an option to insert marketing recommendations directly into the customer experience, such as suggesting next best actions to call center agents.

The third challenge is optimization itself. It’s not always clear what action will result in the best long-term results. A proper answer requires capturing data on how customers are treated and how they later behaved as a result. Some of this will come from non-marketing systems, such as call center records of actions taken and accounting system records of purchases. Again, those systems’ owners may not be eager to share their data, although it’s harder for them to argue against sharing historical information than against sharing control over actual customer interactions.

But the challenge of optimization extends beyond data access. Really understanding the drivers of customer behavior requires deep analysis and no small amount of human insight. Some questions can be answered through formal experiments with test and control groups. But the most important questions often can’t be defined in such narrow terms. Even defining the options to test, such as new offers or marketing messages, takes creative thought beyond what analysis alone can reveal.

And even if you could find the optimal treatment in each situation, the playing field itself keeps shifting. New products, offer structures, and channels change what treatments are available. New systems change the data that can be captured for analysis. New tools change the costs of actions such as creating customer-specific content. These all change the optimization equation: actions that once required expensive human labor can now be done cheaply with automation; fluctuating product costs and prices change the value of different actions; evolving customer attitudes towards privacy and service change the appeal of different offers. The optimal customer experience is a moving target, if not an entirely mythical one.

None of this is news, or really even new: marketing has always been hard. The question is how “unleashing” the power of CDPs and AI makes a difference.

Let’s start with a framework. If you think of modern marketing as a game, then the players have three types of equipment: data systems to collect and organize customer information; decision systems to select customer experiences; and delivery systems to execute those experiences. It’s quite clear that the CDP maps into the data layer and AI maps into the decision layer. This raises the question of what maps into the delivery layer. We’ll return to that later.

First, we have to answer the question: Why would CDP and AI be game changers? To understand that, you have to imagine, or remember, life before CDP and AI. Probably the best word for that is chaos. There are dozens – often hundreds – of data sources on the data layer, and dozens more systems making choices on the decision layer. The reason there are so many decision systems is that each channel usually makes its own choices. Even when channels share centralized decision systems, those are often specialized services that deal with one problem, whether it’s product recommendations or churn predictions or audience segmentation.

CDP and AI promise to end the chaos by consolidating all those systems into one customer data source and one decision engine. Each of these would be a huge improvement by itself:

  • the CDP makes complete, consistent customer data available to all decision systems, and
  • AI enables a single decision engine to coordinate and optimize custom experiences for each individual.

Yes, we’ve had journey orchestration engines and experience orchestration engines available for quite some time, but those work at the segment level. What’s unique about AI is not simply that it can power unified decisions, but that each decision can be tailored to the unique individual.

But we’re talking about more than the advantages of CDP and AI by themselves. We’re talking about the combination, and what makes that a game-changer. The answer is you get a huge leap in what’s possible when that personalized AI decisioning is connected to a unified source of all customer information. 

Remember, AI is only as good as the data you feed it. You won’t get anything near the full value of AI if it’s struggling with partial information, or if different bits of information are made available to different AI functions. Connecting the AI to the CDP solves that problem: once any new bit of information is loaded into the CDP, it’s immediately available to every AI service. This means the AI is always working with complete and up-to-date data, and it’s vastly easier to add new sources of customer data because they only have to be integrated once, into the CDP, to become available everywhere.
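
A toy sketch of that “integrate once, use everywhere” point, with every name invented: each source is connected to the CDP a single time, and each downstream AI service reads the same unified store with no extra plumbing.

```python
class CDP:
    def __init__(self):
        self._profiles: dict[str, dict] = {}

    def ingest(self, customer_id: str, source: str, fields: dict) -> None:
        # One integration per source; every field lands in the shared profile.
        self._profiles.setdefault(customer_id, {}).update(
            {f"{source}.{k}": v for k, v in fields.items()}
        )

    def get_profile(self, customer_id: str) -> dict:
        return self._profiles.get(customer_id, {})

cdp = CDP()
cdp.ingest("c1", "web", {"page_views": 12})
cdp.ingest("c1", "orders", {"lifetime_value": 120.0})  # new source, added once

# Two independent "AI services" see the new data immediately.
def recommend(profile: dict) -> str:
    return "premium_offer" if profile.get("orders.lifetime_value", 0) > 100 else "starter_offer"

def churn_risk(profile: dict) -> str:
    return "low" if profile.get("web.page_views", 0) > 10 else "high"

p = cdp.get_profile("c1")
print(recommend(p), churn_risk(p))  # premium_offer low
```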

To put it in more concrete terms, one unified decision system can coordinate and optimize individual-specific customer experiences across all touch points based on one unified set of customer data. 

That is indeed a game-changer for modern marketing, and it only reaches its full potential if you “unleash” the CDP to consolidate all your customer data, and “unleash” the AI to make all of your experience decisions.

That’s a lot of unleashing. I hope you find it exciting, but you should also be a little bit scared. The question to ask is: How can I take advantage of this ‘game changing potential’ without risking everything on technology that is relatively new and, in the case of AI, largely unproven? Here’s what I would suggest:

Let’s start with CDP. So far, I’ve been using the term without defining it. I hope you know that a CDP creates unified customer profiles that can be shared with any other system. No need to get into the technical details here. What’s important is that the CDP collects data from all your source systems, combines it to build complete customer profiles, and makes them available for any and every purpose.

Some people reading this will already have a CDP in place but most probably do not. So my first bit of advice is: Get one. It’s not such easy advice to follow: a CDP is a big project and there are dozens of vendors to choose from, so you have to work carefully to find a system that fits your needs and then you have to convince the rest of your organization that it’s worth the investment. I won't go into how to do that right now. But probably the most important pro tips are: base your selection on requirements that are directly tied to your business needs, and ensure you keep all stakeholders engaged throughout the entire selection process. If you do those two things, you can be pretty sure you’ll buy a CDP that’s useful and actually used.

That said, there are some specific requirements you’ll want to consider that tie directly to ensuring your CDP will support your AI system. One is to make sure you buy a system that can handle all data types, retain all details, and handle any data volume. This does NOT mean you should load in every scrap of customer-related data that you can find. That would be a huge waste. You’ll want to start with a core set of customer data that you clearly need, which will be data that your decision and delivery systems are already working with. This lets the CDP become their primary data source. Beyond that, load additional data as the demand arises and when you’re confident the value created is worth the added cost.

The second requirement is to ensure your CDP can support real-time access to its data. That’s another complicated topic, because there are different kinds of real-time processes. What you want as a minimum is the ability to read a single customer profile in real time, for example to support a personalization request. And you want your CDP to be able to respond in real time to events such as a dropped shopping cart. Your AI system will need both of those capabilities to make the best customer experience decisions. What’s not included in that list is updating the customer profiles in real time as new data arrives, or rebuilding the identity graph that connects data from different sources to the same customer. Some CDPs can do those things in real time but most cannot. Only some applications really need them.
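
To make the distinction tangible, here is an invented sketch of those two minimum capabilities – a single-profile lookup and an event trigger – with everything else (profile updates, identity graph rebuilds) deliberately left out of the real-time path:

```python
PROFILES = {"c1": {"name": "Ann", "favorite_category": "shoes"}}

def get_profile_realtime(customer_id: str) -> dict:
    # Single-profile read: must return in milliseconds to feed a
    # personalization request while the customer is waiting.
    return PROFILES.get(customer_id, {})

def send_reminder(profile: dict, cart_value: float) -> None:
    # Stand-in for a call to a delivery system.
    print(f"Reminding {profile.get('name', 'customer')} about a ${cart_value:.2f} cart")

def on_event(event: dict) -> None:
    # Event trigger: react immediately to a behavioral signal.
    if event["type"] == "cart_abandoned":
        profile = get_profile_realtime(event["customer_id"])
        send_reminder(profile, event["cart_value"])

on_event({"type": "cart_abandoned", "customer_id": "c1", "cart_value": 89.99})
```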

The third requirement relates to identity management. The CDP needs to know which identifiers, such as email, telephone, device ID, and postal address, refer to the same customer. At CDP Institute, we don’t feel the CDP itself needs to find the matches between those identifiers. That’s because there’s lots of good outside software to do that. We do feel the CDP needs to be able to maintain a current list of matches, or identity graph, as matches are added or removed over time. That’s what lets the CDP combine data from different sources into unified profiles.
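
For the technically inclined, one common way to maintain such a list of matches is a union-find structure. The sketch below is minimal and purely illustrative; note that adding a match is cheap, while removing one can split a cluster and force a rebuild from the surviving pairs, which is one reason graph maintenance is a distinct capability rather than a byproduct of matching:

```python
class IdentityGraph:
    def __init__(self):
        self.parent: dict[str, str] = {}

    def _find(self, x: str) -> str:
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add_match(self, a: str, b: str) -> None:
        # A new match from the outside matching software: link the clusters.
        self.parent[self._find(a)] = self._find(b)

    def same_customer(self, a: str, b: str) -> bool:
        return self._find(a) == self._find(b)

g = IdentityGraph()
g.add_match("email:ann@x.com", "device:d1")
g.add_match("device:d1", "phone:555-0100")
print(g.same_customer("email:ann@x.com", "phone:555-0100"))  # True
```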

My second cluster of advice relates to AI. I’d be surprised if anyone reading this hasn’t at least tested ChatGPT, Bing Chat Search, or something similar. At the same time, there’s quite a bit of research showing that relatively few companies have moved beyond the testing stage to put the advanced AI tools into production.

And that’s really okay, so my first piece of advice is: don’t be hasty.  You should certainly be testing and probably deploying some initial applications, but don’t feel you must plow full speed ahead or you’ll fall behind. Most of your competitors are moving slowly as well. 

That said, you do need to train your people in using AI. They don’t necessarily need to become expert prompt writers, since the systems will keep getting smarter so specific prompting skills will become obsolete quickly. But they do need to build a basic understanding of what the systems can and can’t do and what it’s like to work with them. That will change less quickly than things like a user interface. The more familiar your associates become with AI, the less likely they are to ask it to do something it doesn’t do well or that creates problems for your company.

Third, pay close attention to AI’s ability to ingest and use your company’s own data. Remember, the game-changing marketing application for AI is to create an optimal experience for each individual. That means it must be able to access that individual’s data. This is an ability that was barely available in tools like ChatGPT six months ago, but has now become increasingly common. Still, you can bet there will be huge differences in how different products handle this sort of data loading. Some will be designed with customer experience optimization in mind and some won’t. So be sure to look closely at the capabilities of the systems you consider.

Fourth, and closely related, the industry faces a huge backlog of unresolved issues relating to privacy, intellectual property ownership, security, bias, and accuracy. Again, these are all evolving with phenomenal speed, so it’s hard for anyone to keep up – even including the AI specialists themselves. Unless that’s your full time job, I suggest that you keep a general eye on those developments so you’re aware of issues that might come with any particular application you’re considering. Then, when you do begin to explore an application, you’ll know to bring in the experts to learn the current state of play.

Similarly, keep an eye on the new capabilities that AI systems are adding. This is also evolving at incredible speed. Some of those capabilities may change your opinion of what the systems can do well or poorly. Some will be directly relevant to your needs and may form the basis for powerful new applications. We’re still far away from the “one AI to rule them all” that will be the ultimate game-changer for marketing. But it’s coming, so be on the alert.

This brings us back to the third level of marketing technology: delivery. Will there be yet one more game-changer, a unified delivery system that offers the same simplification advantages as unified data and decision layers? The giant suite vendors like Salesforce and Adobe certainly hope so, as do the unified messaging platforms like Braze and Twilio. The fact that we can list those vendors offers something of an answer: some companies think a unified delivery layer is possible and would argue they already provide one. I’m not so confident because new channels keep popping up and it’s nearly impossible for any one vendor to support them all immediately.

What seems more likely is a hybrid approach where most companies have a core delivery platform that handles basic channels like email, websites, and advertising, and supports third-party plug-ins to add the channels or features it lacks. These platforms are already common, so this is less a prediction of the future than an observation of the present, which seems likely to continue. The core delivery platform offers a single connection to the decision layer run by the AI. This gives the primary benefits gained from a single connection between layers, although I wouldn’t call it a game-changer, if only because it already exists.

My recommendation here is to adopt the delivery platform approach, seeking a platform that is as open as possible to plug-ins so it can coordinate experiences across as many channels as possible. In this view, the delivery platform is really an orchestration system. Which channels are actually delivered by the delivery platform and which are delivered by third-party plug-ins is relatively unimportant from an architectural point of view. Of course, vendors, marketers, and tech staff will all care a great deal about which tools your company chooses.

While we’re discussing architectural options, I should also mention that the big suites would argue that data, decision, and delivery layers should all be part of one unified product, reducing integration efforts to a minimum. That may well be appealing, but remember that most of the integrated suites were cobbled together from separate systems that were purchased by the vendors over time. Connecting all the bits can be almost as much work as connecting products from different vendors.

And, of course, relying on a single vendor for everything means accepting all parts of their suite – some of which may not be as good as tools you could purchase elsewhere. The good news is most suite vendors have connectors that enable users to use external systems instead of their own components for important functions. As always, you have to look in detail at the actual capabilities of each system before judging how well it can meet your needs.

So, where does this leave us?

We’ve seen that the object of the modern marketing game is to deliver the optimal experience for each customer. And we’ve seen that challenges to winning that game include organizational conflicts, fragmented data, and fragmented decision and delivery systems.

We’ve also seen that the combination of customer data platforms and AI systems can indeed be game-changing, because CDPs create the unified data that AI systems need to deliver experiences that are optimized with a previously-impossible level of accuracy.

CDPs and AI won’t fix everything. Organization conflicts will still remain, since other departments can’t be expected to turn over all responsibility to marketing. And fragmentation will probably remain a feature of the delivery layer, simply because new opportunities appear too quickly for a single delivery system to provide everything.

In short, the game may change but it never ends. Build strong systems that can meet current requirements and can adapt easily to new ones. But never forget that change is inherently unpredictable, so even the most carefully crafted systems may prove inadequate or unexpectedly become obsolete. Adapting to those situations is where you’ll benefit from investment in the most important resource of all: people who will work to solve whatever problems come up in the interest of your company and its customers.

And remember that no matter how much the game changes, the goal is always the same: the best experience for each customer. Make sure everything you do has that goal in mind, and success will follow.

Saturday, July 01, 2023

In a World Run by AI, The Best Data Wins

Like everyone else in martech land, I’ve been pondering the future of marketing in a world populated with AI. Most research I’ve seen agrees with this HubSpot report that marketers’ top application for generative AI has been content creation (48%), followed closely by data analysis (45%) and learning how to do things (45%). So it’s fairly clear that the immediate impact of AI will be to let marketers create vastly more copy, including real-time messages tailored to specific individuals.

Extending this a bit, we can expect those real-time messages to be delivered within ever-more-finely tailored campaign flows (something else that AI can easily generate) or without formal campaign structures at all. These messages will be orchestrated and optimized across all channels, fulfilling the omnichannel vision that has hovered like a mirage on the industry horizon for decades. 

(Whether this results in more or fewer martech applications is a separate question. I tend to agree with this GP Bullhound study, which argues that “Any application introduced as an AI-enhanced alternative to an existing application or as a feature to a platform will likely become redundant when that incumbent platform implements the same AI features.”  This suggests that fewer specialist martech products will be needed, since truly new applications are relatively rare. Moreover, the productivity benefits of integrated suites are magnified when AI can easily orchestrate tasks within the suite, but not those on the outside. And, AI systems are inherently more adaptable than traditional software, which must be explicitly programmed for each new task. So extending existing applications becomes easier when AI enters the picture.)

Of course, what the humans reading this piece really care about is the role that they will play in this AI-managed universe. (Come to think of it, the AIs reading this may also care more than we know.) We’re frighteningly close to living the old joke about the factory of the future, where the one human employee's job is to feed the dog, and the dog's job is to keep the human away from the equipment. 

The conventional answer to that question is that humans will still be needed to provide the creative and emotional insight that AIs cannot deliver. (That’s what ChatGPT told me when I asked.) Frankly, I don’t buy it: just as people who can fake sincerity have it made, the AIs will quickly learn to mimic creativity and find which emotion-based messages work best.

Still, let’s be a bit optimistic and assume the marketing department of the future includes one human whose job is to feed the AI. Let’s even assume that human adds some creative and emotional value. The net result across the marketing industry as a whole will be that every company produces a wonderfully high and consistent level of marketing outputs. While that’s great in many ways, it also means that better marketing will no longer be a competitive differentiator. It’s a repeat of the situation in manufacturing since the 1970s, when quality best practices were applied everywhere, so the differences between the best and worst products were often too small to matter. The winners in that world were companies who could use marketing to differentiate what in fact were commodity products. But when AI lets every company produce great marketing, then marketing itself is also a commodity.

So, how will companies compete in this new world? I’ve already argued it won’t be through emotional insight or creativity. I briefly thought that people might do better than machines at adapting to rapid change. The theory was that AI can only be trained on historical data, so it will flounder when faced with unexpected events – which are increasingly common. But, let’s face it: humans also flounder in the face of the unexpected. It’s not at all clear they’ll be better than machines at predicting abrupt change, recognizing changed circumstances, collecting new evidence, and finding the new best actions. In fact, humans are heavily biased in favor of making decisions based on past experience, so I’d probably bet on the AI.

But all is not lost. I still see two ways for companies, and the humans who run them (for now), to distinguish themselves.

The first is customer experience. If you consider the true commodity industries, such as telecommunications, air travel, hotels, and financial services, what makes customers loyal to one or another provider is rarely the actual product or price. Rather, it’s the way they are treated. As a baseline, customers expect reasonable service, delivered pleasantly. But loyalty is really won or lost when there’s a problem or special request. This gives the company a chance to distinguish itself, either against an actual competitor or against a customer’s expectations of how they should be treated. 

Company policies and systems play a large role in what’s possible, but ultimately it’s the front-line employee whose training, attitudes, and choices make or break the experience. In the future, AI will surely play a larger role in managing these interactions. But, as with marketing, this will yield largely similar results because companies will be using similar AI systems. The differentiator will be the company’s people.

That's good to know, but customer experience is rarely under marketers’ direct control. This leaves one final straw for them to lean on: the data used to train their AIs.

Remember, AI programs themselves will be widely available. As with any other technology, the difference in results will depend on how they’re used, not any difference in the technology itself. Once AI systems are fully deployed, most decisions about things like content and program design will be made by the system, limiting the impact of user choice on the outcomes. But the one thing that will remain under users’ control is the data fed into the AI systems. It’s differences in that data that will drive differences in outcomes. In short: whoever has the best data, wins.

This is an area where marketers have a major role to play. They may not control the various internal and external systems that provide customer data. But they will have a large say in what those systems feed into the primary customer data store which, in turn, will feed the AI. Marketers who select the best data feeds will have a more effective AI and, thus, better final results. This will be true even though AI makes data collection easier: that will reduce the technical barriers to data gathering at all companies, but most barriers are actually organizational and budgetary. Those are company-specific decisions whose outcomes marketers can affect.

This may not be the cheeriest news you’ve heard today. Few marketers chose their career because they wanted to fight political battles with IT and customer success teams. But it does mean that many marketers can continue to play a major role in the success of their organizations, even as most of the traditional marketing tasks are taken over by AI. No doubt, the marketers who remain employed will sneak a few of their creative and emotional insights into AI prompts.  Maybe their inputs will even make a positive difference. But what will really matter is how good a job they do at feeding the AI with the best possible data.  That's what will empower it to deliver better results than the competition.

Remember: in a world run by AI, the best data wins.

Friday, May 26, 2023

Chat-Based Development Changes the Build vs Buy Equation

The size of most markets is governed by a combination of supply and demand. Typically one or the other is limited: while there may be a bottomless appetite for chocolate, there is only so much cocoa in the world; conversely, while there is a near-infinite supply of Internet content, there are only so many hours available to consume it. The marketing technology industry has been a rare exception with few constraints on either factor. The supply of new products has grown as software-as-a-service, cloud platforms, low-code development tools, and other technical changes reduce development cost to something almost anyone with a business idea can afford, and widely available funding reduces barriers still further. Meanwhile, the non-stop expansion in marketing channels and techniques has created an endless demand for new systems. Further accelerating demand, the spread of digital marketing to nearly all industries has powered development of industry-specific versions of many product types.

Despite the visibility of these factors, the uninterrupted growth of martech has been accompanied almost from the start by predictions that it will soon stop. This is based less on any specific analysis than on a fundamental sense that what goes up must come down, which is generally a safe bet. There’s also an almost esthetic judgement that a market so large is just too unruly, and that the growing number of systems used by each marketing department must indicate substantial waste and lost control.

One common metric cited as evidence for excess martech is utilization rate: Gartner, the high priests of rational tech management, reported last year that martech utilization rates fell from 58% in 2020 to 42% in 2022, and ranted in a recent press release that “The willingness to let the majority of their martech stack sit idle signifies a fundamental resource disconnect for CMOs. It’s difficult to imagine them leaving the same millions of dollars on the table for agencies or in-house resources. This trade-off of technology over people will not help marketing leaders accelerate out of the challenges a recession will bring.” They were especially incensed that their data showed CMOs are increasing martech’s share of the marketing budget, comparing them to “gamblers looking to write-off their losses with the next bet.” (They probably meant “recoup”, not “write-off” their losses.)

This isn’t just a Gartner obsession. Reports from Integrate, Wildfire, and Ascend2 also cite low utilization rates as evidence of martech overspending.

It ain’t necessarily so.

For one thing, underutilization is common in all departments, not just marketing. Nexthink found half of all SaaS licenses are unused. Zylo put average company-wide utilization at 56% and Productiv put it at 45%. (These studies measure app usage, not feature usage. But you can safely bet that feature usage rates are similarly low throughout the organization.)

More fundamentally, there’s no reason to expect people to use all the features of the products they buy. What fraction of Excel or PowerPoint features do you use? What’s important is finding a system with the features you need; if it has other features you don’t need, that’s really okay so long as you’re not paying extra or finding they get in your way. Software vendors routinely add features required by a subset of their users. Since that helps them serve more needs for more clients, it’s a benefit, not a problem.

The real problem isn’t the presence of features you don’t need, but the absence of features you do. That’s what pushes companies to buy new systems to fill in the gaps. As mentioned earlier, the great and ongoing growth of the martech industry is due in good part to new technologies and channels creating new needs which existing systems don’t fill. That said, it's true that some purchases are unnecessary: buyers don’t always realize that a system they own offers a capability they need. And, since vendors add new features all the time, a specialist system may become redundant if the same features are added to a company’s primary system.

In both of those situations, avoiding unnecessary expense depends on marketers keeping themselves informed about what their current systems can do. This is certainly a problem: thanks to the miracle of SaaS, it’s often easier to buy a new system than to fully research the features of systems already in place. (Installing and integrating the new system will probably be harder, but that comes later.) So we do see reports of marketers trying to prune unnecessary systems from their stacks: for example, the previously-cited Integrate report found that 26% of marketers expected to shrink their stacks. Similarly, the CMO Council found 25% were planning to cut martech spend and Nielsen said 24% were planning martech reductions. (Before you sell all your martech stock, you should also know that each report found even more marketers were planning to increase their martech budgets: 32% for Integrate, 36% for CMO Council, and 56% for Nielsen.)

Let’s assume, if only for argument’s sake, that marketers are reasonably diligent about buying only products that close true gaps. New gaps will continue to appear so long as innovation continues, and there’s no reason to expect innovation will stop. So can we expect the growth in martech products will also continue indefinitely?

Until recently I would have said yes (unless budgets are severely crimped by recession). But the latest round of AI-based tools has me reconsidering. 

Specifically, I wonder whether marketers will close gaps by building their own applications with AI instead of buying applications from someone else. If so, industry expansion could halt.

Marketers have been using no-code technologies to build their own applications for some time. Some of these may have depressed demand for new martech products but I don’t think the impact has been substantial. That’s because no-code tools are usually constrained in some way: they’re an interface on top of an existing application (drag-and-drop journey builders), a personal productivity tool (Excel), or limited to a single function (Zapier for workflow automation). Building a complete application with such limited tools is either impossible or not worth the trouble.

The latest AI systems change this. Chat-based interfaces let users develop an application by describing what they want a system to do. This enables the resulting system to perform a much broader set of tasks than a drag-and-drop no-code interface. That said, the actual capabilities depend on what the model is trained to do. Today, it still takes considerable technical knowledge to include the right details in the instructions and to refine the result. But the AIs will quickly get better at working out those details for themselves, drawing on larger and more sophisticated information about how things should be done. Microsoft’s latest description of copilots and plug-ins points in this direction: “customers can use conversational language to create dataflows and data pipelines, generate code and entire functions, build machine learning models or visualize results.”

What’s important is that the conversational interface will drive a system that automatically employs professional-grade practices to ensure the resulting application is properly engineered, tested, deployed, and documented. In effect, it pairs the business user with a really smart developer who will properly execute what the user describes, let the user examine the results, and tweak the product until it meets her goals. Human developers who can do this are rare and expensive.  AI-based developers who can do this should soon be common and almost free.
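
If you want to picture that loop, here is a toy stand-in; the generate() function below is a placeholder for a real code-generation model, not any actual API, but it shows the describe, generate, inspect, deploy shape of the process:

```python
def generate(request: str) -> str:
    # Placeholder: a real system would send the request to a code-generation
    # model. This canned response just illustrates the shape of the loop.
    return (
        "def monthly_active_customers(events):\n"
        "    # Count distinct customers with an event in the last 30 days.\n"
        "    import datetime\n"
        "    cutoff = datetime.date.today() - datetime.timedelta(days=30)\n"
        "    return len({e['customer_id'] for e in events if e['date'] >= cutoff})\n"
    )

code = generate("Count how many distinct customers were active in the last 30 days")
print(code)                    # the user inspects the result...
namespace: dict = {}
exec(code, namespace)          # ...and deploys it (after review, one hopes)

import datetime
events = [{"customer_id": "c1", "date": datetime.date.today()}]
print(namespace["monthly_active_customers"](events))  # 1
```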

This change overcomes the fundamental limitation of most user-built apps: they can only be deployed and maintained by the person who built them and are almost guaranteed to violate quality, security and privacy standards. This issue – let’s call it governance – has almost entirely blocked deployment of user-built systems as enterprise applications. Chat-built systems remove that barrier while fundamentally altering the economics of system development.  More concretely: building becomes cheaper so buying becomes less desirable.  This could significantly reduce the market for purchased software.

Anyone who has ever been involved in an enterprise development project will immediately recognize the flaw in this argument: there’s never just one user and most development time is actually spent getting the multiple stakeholders to agree on what the system should do. Agile methodologies mitigate these issues but don’t entirely eliminate them. 

Whether chat-driven development can overcome this barrier isn’t clear.  It will certainly speed some things up, which might enable teams to build and compare alternative versions when trying to reach a decision. But it might also enable independent users to create their own versions of an application, which would probably lead to even stiffer resistance to adopting someone else’s approach. 

One definite benefit should be that chat-based development tools will learn to explain how the systems they build actually function, in terms that humans can understand. Responsible tech managers will insist on this before they deploy the applications those tools create.

In any event, I do believe that chat-built systems will make home-built software a viable alternative to packaged systems in a growing number of situations. This will especially apply to systems that fill small, specialized gaps created as new marketing technologies develop. Since filling these gaps has been a major factor behind the continuous growth of martech, industry growth may slow as a result.

Incidentally, we need a name for chat-based development. I’ll nominate “vo-code”, as shorthand for voice-based coding, since it fits in nicely with pro-code, low-code, and no-code. I could be talked into “robo-code” for the same reason.

Monday, May 15, 2023

Let's Stop Confusing LEGO Blocks with Computer Software

Can we retire Lego blocks as an analogy for API connections?  Apart from being a cliché that’s as old as the hills and tired as a worn-out shoe, it gives a false impression of how easy such connections are to make and manage.  

Very simply, all Lego block connectors are exactly the same, while API connections can vary greatly.  This means that while you can plug any Lego block into any other Lego block and know it will work correctly, you have to look very carefully at each API connector to see what data and functions it supports, and how these align with your particular requirements.   Often, getting the API to do what you need will involve configuration or even customization – a process that can be both painstaking and time consuming.  When was the last time you set parameters on a Lego block?
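
To illustrate (with entirely invented vendors and endpoints; nothing here names a real API), here are two services that perform the same “send an email” task but expose different authentication schemes, field names, and required settings, so each connector needs its own mapping:

    import json, urllib.request

    def build_send_vendor_a(api_key, to, subject, body):
        # Vendor A: bearer token in a header; fields named recipient/title/text.
        return urllib.request.Request(
            "https://api.vendor-a.example/v2/messages",
            data=json.dumps({"recipient": to, "title": subject,
                             "text": body}).encode(),
            headers={"Authorization": "Bearer " + api_key,
                     "Content-Type": "application/json"})

    def build_send_vendor_b(account, token, to, subject, body):
        # Vendor B: credentials inside the payload, different field names,
        # and a required setting with no Vendor A equivalent.
        return urllib.request.Request(
            "https://vendor-b.example/api/send",
            data=json.dumps({"account": account, "token": token,
                             "to_address": to, "subject_line": subject,
                             "html": body, "track_opens": False}).encode(),
            headers={"Content-Type": "application/json"})

Identical purpose, incompatible shapes: exactly the situation Lego connectors never present.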

There’s a second way the analogy is misleading.  Lego blocks are truly interchangeable: if you have two blocks that are the same size and shape, they will do exactly the same thing (which is to say, nothing; they’re just solid pieces of plastic).  But no two software applications are exactly the same.  Even if they used the same API, they would have different internal functions, performance characteristics, and user interfaces.  Anyone who has tried to pick a WordPress plug-in or smartphone app knows the choice is never easy because there are so many different products.  Some research is always required, and the more important the application, the more important it is to be sure you select a product that meets your needs.  

This is why companies (and individuals) don’t constantly switch apps that do the same thing: there’s a substantial cost to researching a new choice and then learning how to use it.  So people stick with their existing solution even if they know better options are available.  Or, more precisely, they stick with their existing solution until the value gained from changing to a new one is higher than the cost of making a switch.  It turns out that’s often a pretty high bar to meet, especially because the limiting factor is the time of the person or people who have to make the switch, and very often those people have other, higher priority tasks to complete first.

I’ve made these points before, although they do bear repeating at a time when “composability” is offered as a brilliant new concept rather than a new label for micro-services or plain old modular design.  But the real reason I’m repeating them now is I’ve seen the Lego block analogy applied to software that users build for themselves with no-code tools or artificial intelligence.  The general argument is those technologies make it vastly easier to build software applications, and those applications can easily be connected to create new business processes.

The problem is, that’s only half right.  Yes, the new tools make it vastly easier for users to build their own applications.  But easily connected?  

Think of the granddaddy of all no-code tools, the computer spreadsheet.  An elaborate spreadsheet is an application in any meaningful sense of the term, and many people build wonderfully elaborate spreadsheets.  But those spreadsheets are personal tools: while they can be shared and even connected to form larger processes, there’s a severe limit to how far they can move beyond their creator before they’re used incorrectly, errors creep in, and small changes break the connection of one application to another.  

In fact, those problems apply to every application, regardless of who built it or what tool they used.  They can only be avoided if strict processes are in place to ensure documentation, train users, and control changes.  The problem is actually worse if it’s an AI-based application where the internal operations are hidden in a way that spreadsheet formulas are not.

And don’t forget that moving data across all those connections has costs of its own.  While the data movement costs for any single event can be tiny, they add up when you have thousands of connections and millions of events.  This report from Amazon Prime Video shows how they reduced costs by 90% by replacing a distributed microservices approach with a tightly integrated monolithic application.  Look here for more analysis.  Along related lines, this study found that half of “citizen developer” programs are unsuccessful (at least according to CIOs), and that custom solutions built with low-code tools are cheaper, faster, better tailored to business needs, and easier to change than systems built from packaged components.  It can be so messy when sacred cows come home to roost.

In other words, what’s half right is that application building is now easier than ever.  What’s half wrong is the claim that applications can easily be connected to create reliable, economical, large-scale business processes.  Building a functional process is much harder than connecting a pile of Lego blocks.

There’s one more, still deeper problem with the Lego analogy.  It leads people to conceive of applications as distinct units with fixed boundaries.  This is problematic because it creates a hidden rigidity in how businesses work.  Imagine all the issues of selection cost, connection cost, and functional compatibility suddenly vanished, and you really could build a business process by snapping together standard modules.  Even imagine that the internal operations of those modules could be continuously improved without creating new compatibility issues.  You would still be stuck with a process that is divided into a fixed set of tasks and executes those tasks either independently or in a fixed sequence, where the output of one task is input to the next.  

That may not sound so bad: after all, it’s pretty much the way we think about processes.  But what if there’s an advantage to combining parts of two tasks so they interact?  If the task boundaries are fixed, that just can’t be done.  

For example, most marketing organizations create a piece of content by first building a text, and then creating graphics to illustrate that text.  You have a writer here, and a designer there.  

The process works well enough, but what if the designer has a great idea that’s relevant but doesn’t illustrate the text she’s been given? Sure, she could talk to the writer, but that will slow things down, and it will look like “rework” in the project flow – and rework is the ultimate sin in process management.  More likely, the designer will just let go of her inspiration and illustrate what was originally requested.  The process wins.  The organization loses.

AI tools or apps by themselves don’t change this.  It doesn’t matter if the text is written by AI and then the graphics are created by AI.  You still have the same lost opportunity -- and, if anything, the AIs are even less likely than people to break the rules in service of a good idea.

What’s needed here is not orchestration, which manages how work moves from one box to the next, but collaboration, which manages how different boxes are combined so each can benefit from the other.

This is a critical distinction, so it’s worth elaborating a bit.  Orchestration implies central planning and control: one authority determines who will do what and when.  Any deviation is problematic.  It’s great for ensuring that a process is repeated consistently, but requires the central authority to make any changes.  The people running the individual tasks may have some autonomy to change what they do, but only if the inputs and outputs remain the same.  The image of one orchestra conductor telling many musicians what to do is exactly correct.  

Collaboration, on the other hand, assumes people work as a team to create an outcome, and are free to reorganize their work as they see fit.  The team can include people from many different specialties who consult with each other.  There’s no central authority and changes can happen quickly so long as all team members understand what they need to do.  There’s no penalty for doing work outside the standard sequence, such as the designer showing her idea to the copywriter.  In fact, that’s exactly what’s supposed to happen.  The musical analogy is a jazz ensemble although a clearer example might be a well-functioning hockey or soccer team: players have specific roles but they move together fluidly to reach a shared goal as conditions change.

If you want a different analogy: orchestration is actors following a script, while collaboration is an improv troupe reacting to each other and the audience.  Both can be effective but only one is adaptable.
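
A toy sketch may make the structural difference easier to see (every name and behavior here is invented for illustration):

    def write_copy(brief, art_note=""):
        # Stand-in "writer": a person or an AI.
        return "Copy about " + brief + art_note

    def design_art(text):
        # Stand-in "designer": only ever sees the text handed over.
        return "Image illustrating: " + text

    def orchestrate(brief):
        # Fixed sequence, fixed handoffs; revisiting a step counts as rework.
        text = write_copy(brief)
        art = design_art(text)
        return text, art

    def collaborate(brief, rounds=2):
        # Shared workspace: each round, either side may reshape the other's work.
        text, art = write_copy(brief), ""
        for _ in range(rounds):
            art = design_art(text)
            text = write_copy(brief, art_note=" (reworked to fit the art)")
        return text, art

    print(orchestrate("spring sale"))
    print(collaborate("spring sale"))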

Of course, there’s nothing new about collaboration.  It’s why we have cross-functional teams and meetings.  But the time those teams spend in meetings is expensive and the more people and tasks are handled in the same team, the more time those meetings take up.  Fairly soon, the cost of collaboration outweighs its benefits.  This is why well-managed companies limit team size and meeting length.

What makes this important today is that AI isn’t subject to the same constraints on collaboration as mere humans.  An AI can consider many more tasks simultaneously with collaboration costs that are close to zero.  In fact, there’s a good chance that collaboration costs within a team of specialist AIs will be less than communication costs of having separate specialist AIs each execute one task and then send the output to the next specialist AI.

If you want to get pseudo-mathy about it, write equations that compare the value and cost added by collaboration with the value and cost of doing each task separately.  The key relationship as you add tasks is that collaboration cost grows quadratically while value grows linearly.*  This means collaboration cost eventually increases faster than value, until at some point it exceeds the value.  That point marks the maximum effective team size.
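
Spelled out under the footnote’s assumptions (team size equals the task count n, each task adds one one-hour meeting, and everyone attends every meeting), the comparison is:

    \[
    C_{\text{collab}}(n) = n \cdot n \cdot 1\,\text{hour} = n^{2}, \qquad
    C_{\text{seq}}(n) = 2n, \qquad
    V(n) = v\,n
    \]

Collaboration stays worthwhile only while \( n^{2} \le v\,n \), i.e. while \( n \le v \): the value added per task, measured in meeting-hours, is itself the maximum effective team size.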

We can do the same calculation where the work is being done by AIs rather than humans.  Let’s generously (to humans) assume that AI collaboration adds the same value as human collaboration.  This means the only difference is the cost of collaboration, which is vastly lower for the AIs.  Even if AI collaboration cost also rises quadratically, it won’t exceed the value added by collaboration until the number of tasks is very, very large.
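
A quick sketch of that break-even calculation, with every dollar figure invented purely for illustration:

    def max_team_size(value_per_task, cost_per_meeting_hour):
        # Largest n where quadratic collaboration cost still fits linear value.
        n = 1
        while (n * n) * cost_per_meeting_hour <= n * value_per_task:
            n += 1
        return n - 1

    print(max_team_size(500, 100))    # humans at $100/hour: 5 tasks
    print(max_team_size(500, 0.01))   # AIs at $0.01/hour: 50,000 tasks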


Of course, finding the actual values for those curves would be a lot of work, and, hey, this is just a blog post.  My main point is that collaboration allows organizations to restructure work so that formerly separate tasks are performed together.  Building and integrating task-specific apps won’t do this, no matter how cheaply they’re created or connected.  My secondary point is that AI increases the amount of profitable collaboration that’s possible, which means it increases the opportunity cost of sticking with the old task structure.

As it happens, we don’t need to imagine how AI-based collaboration might work.  Machine learning systems today offer a real-world example of the difference between human work and AI-based collaboration.

Before machine learning, building a predictive model typically followed a process divided into sequential tasks: the data was first collected, then cleaned and prepped for analysis, then explored to select useful inputs, then run through modeling algorithms.  Results were then checked for accuracy and robustness and, when a satisfactory scoring formula was found, it was transferred into a production scoring system.  Each of those tasks was often done by a different person, but, even if one person handled everything, the tasks were sequential.  It was painful to backtrack if, say, an error was discovered in the original data preparation or a new data source became available late in the process.

With machine learning, this sequence no longer exists.  Techniques differ, but, in general, the system trains itself by testing a huge number of formulas, learning from past results to improve over time.  Data cleaning, preparation, and selection are part of this testing, and may be added or dropped based on the performance of formulas that include different versions.  In practice, the machine learning system will probably draw on fixed services such as address standardization or identity resolution.  But it at least has the possibility of testing methods that don’t use those services.  More important, it will automatically adjust its internal processes to produce the best results as conditions change.  This makes it economical to rebuild models on a regular basis, something that can be quite expensive using traditional methods.
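
For a minimal sketch of that integrated style, here’s a joint search over preparation choices and model settings using scikit-learn (my choice of tool, not one named in this post).  The point is that data prep is tested alongside the model instead of being frozen as a separate upstream task:

    from sklearn.datasets import make_classification
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    pipe = Pipeline([("scale", StandardScaler()),
                     ("select", SelectKBest()),
                     ("model", LogisticRegression(max_iter=1000))])

    # Feature selection ("prep") and model strength are one joint decision.
    search = GridSearchCV(pipe, {"select__k": [5, 10, 20],
                                 "model__C": [0.1, 1.0, 10.0]}, cv=5)
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))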

Note that it might be possible to take the machine learning approach by connecting separate specialist AI modules.  But this is where connection costs become an issue, because machine learning runs an enormous number of tests.  This would create a very high cumulative cost of moving data between specialist modules.  An integrated system will have fewer internal connections, keeping the coordination costs to a minimum.
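
A back-of-envelope calculation (every number here is invented) shows how quickly those per-hop costs accumulate:

    tests = 1_000_000    # formulas tried during an automated search
    hops = 4             # separate specialist modules each test passes through
    ms_per_hop = 5       # serialization plus network time per hop
    hours = tests * hops * ms_per_hop / 1000 / 3600
    print(f"{hours:.1f} hours of pure data movement")   # about 5.6 hours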

I may have wandered a bit here, so let me summarize the case against the Lego analogy:

  • It ignores integration costs.  You can snap together Lego blocks, but you can’t really snap together software modules.
  • It ignores product differences.  Lego blocks are interchangeable, but software modules are not.  Selecting the right software module requires significant time, effort, and expertise.
  • It prevents realignment of tasks, which blocks improvements, reduces agility, and increases collaboration costs.  This is especially important when AI is added to the picture, because expanded collaboration is a major potential benefit from AI technologies.

Lego blocks are great toys but they’re a poor model for system development.  It’s time to move on.
 

___________________________________________________________
* My logic is that collaboration cost is essentially the amount of time spent in meetings.  This is a product of the number of meetings and the number of people in each meeting.  If you assume each task adds one more meeting and one more team member, and each meeting lasts one hour, then a one-task project has one meeting with one person (one hour, talking to herself), a two-task project has two meetings with two people in each (four hours), a three-task project has three meetings with three people (nine hours), and so on.  When tasks are done sequentially, there is presumably a kick-off at the start of each task, where the previous team hands the work off to the new team: so each task adds one meeting of two people, or two hours, a linear increase.

There’s no equivalently easy way to estimate the value added by collaboration, but it must grow by some amount with each added task (i.e., linearly), and it’s likely that diminishing returns prevent it from increasing endlessly.  So linear growth is a reasonable, if possibly conservative, assumption.  It’s clearer that cumulative value will grow when tasks are performed sequentially, since otherwise the tasks wouldn’t be added.  Let’s again assume the increase is linear.  Presumably the value grows faster with collaboration than with sequential management, but if both are growing linearly, the difference will grow linearly as well.
 
Friday, May 05, 2023

Will ChatGPT Destroy Martech?

Like everyone else, I’ve been pondering what generative AI means for martech, marketing, and the world in general.  My crystal ball is no clearer than yours but I’ll share my thoughts anyway.

Let’s start by looking at how past technology changes have played out.  My template is the transition from steam to electric power in factories.  This happened in stages: first, the new technology was used in exactly the same way as the old technology (in factories, this meant powering the shafts and belts that previously were powered by waterwheels or steam engines).  Then, the devices were modified to make better use of the new technology’s capabilities (by attaching motors directly to machine tools).  Finally, the surrounding architecture was changed to take advantage of the new possibilities (freed from the need to connect to a central mechanical energy source, factories went from being small, vertical structures to large horizontal ones, which allowed greater scale and efficiency).  We should probably add one more stage, when the factories started to produce new products that were made possible by the technology, such as washing machines with electric motors.

During the earliest stages of the transition, attention focused on the new technology itself: factories had “chief electricians” and companies had “chief electricity officers”, whose main advantage was that they were among the few people who understood the new technology.  Those roles faded as the technology became more widely adopted.  The exact analogy today is the “prompt engineer” in AI, a role that will likely be even shorter-lived.

Most of the discussion I see today about generative AI is very much stuck in the first phase: vendors are furiously releasing tools that replace this worker or that worker, or even promising a suite of tools that replace pretty much everyone in the marketing department.  (See, for example, this announcement from Zeta Global: https://zetaglobal.com/press-releases/zeta-introduces-generative-ai-agents-powered-by-zoe/ .)  Much debate is devoted to whether AI will make workers in those jobs more productive (hurrah!) or replace them entirely (boo!).  I don’t find this particular topic terribly engaging since the answer is so obviously “both”: first the machines will help, and, as they gradually get better at helping, they will eventually take over.  Or, to put it in other terms: as humans become more productive, companies will need fewer of them to get their work done.  Either way, lots of marketers lose their jobs.

(I don’t buy the wishful alternative that the number of marketers will stay the same and they’ll produce vastly more, increasingly targeted materials.  The returns on the increasing personalization are surely diminishing, and it’s unrealistic to expect company managers to pass up an opportunity to reduce headcount.)

While the exact details of the near future are important – especially if your job is at stake – this discussion is still about the first stage of technology adoption, a one-for-one replacement of the old technology (human workers) with the new technology (AI workers).   The much more interesting question is what happens in the second and third stages, when the workplace is restructured to take full advantage of the new technology’s capabilities.

I believe the fundamental change will be to do away with the separate tasks that are now done by specialized individuals (copywriters, graphic designers, data analysts, campaign builders, etc.).  Those jobs have evolved because each requires complex skills that take full-time study and practice to master.  The division of labor has seemed natural, if not inevitable, because it mirrors the specialization and linear flow of a factory production line – the archetype for industrial organization for more than a century.

But AI isn’t subject to the same constraints as humans.  There’s no reason a single AI cannot master all the tasks that require different human specialists.  And, critically, this change would bring a huge efficiency advantage because it would do away with the vast amount of time now spent coordinating the separate human workers, teams, and departments.  There would be other advantages in greater agility and easier data access.  Imagine that the AI can get what it needs by scanning the enterprise data lake, without the effort now needed to transform and load it into warehouses, CDPs, predictive modeling tools, and other systems.  Maintaining those systems takes another set of specialists whose jobs are likely to vanish, along with all the martech managers who spend their time connecting the different tools.

Of course, the vision of “citizen developers” using AI to create sophisticated personal applications on the fly is entirely irrelevant when the citizen developers themselves no longer have jobs.  The thousands of independent applications that make up today’s martech industry may vanish, unless the marketing AIs build and trade components among themselves – which could happen.

So far, I’ve predicted that monolithic AI systems rather than teams of (human or robotic) specialists will create marketing programs similar to today’s campaigns and interactions.  But that assumes there’s still a demand for today’s types of campaigns and interactions.  This brings us to the final type of change: in the outputs themselves.

Again, we may be in for a very fundamental transformation.  The output of a marketing department is ultimately determined by how people buy things.  It’s a safe bet that AI will change that dramatically, although we don’t know exactly how.  For the sake of argument, let’s assume that people adopt AI agents to manage more of their personal lives for them (pretty likely) and that they delegate most purchasing decisions to those agents (less certain but plausible, and already happening to a limited degree).  If that happens, our AI marketing brains will be selling to other AIs, not to people.  To imagine what that looks like, we again have to move beyond expecting the AI to do what people do now, and look at the best way for an AI to achieve the same results.

If you think about marketing today – and for all the yesterdays that ever were – it’s based on the fundamental fact that humans have a limited amount of attention.  Every aspect of marketing is ultimately aimed at capturing that attention and feeding the most effective information to the human during the time available.

But AIs have unlimited attention.

If an AI wants to buy a product, it can look at every option on the market and collect all the information available about each one.  Capturing the AI’s attention isn’t an issue; presenting it with the right information is the challenge.  This means the goal of the marketing department is to ensure product information is available everyplace the AI might look, or maybe in just one place, if you can be sure the AI will look there.   Imagine the world as a giant market with an infinite number of sellers but also buyers who can instantly and simultaneously visit every seller and gather all the information they provide.  The classical economist’s fantasy – perfect market, perfect information, no friction – might finally come true.
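
Nobody knows what form that “one place” would take, but here is one invented guess at a complete, machine-readable listing an AI buyer could consume (every field is hypothetical):

    listing = {
        "product": "TrailRunner 3 shoe",
        "price": {"amount": 89.00, "currency": "USD"},
        "attributes": {"weight_g": 240, "drop_mm": 6, "widths": ["D", "2E"]},
        "availability": {"in_stock": True, "ships_days": 2},
        "support": {"returns_days": 60, "warranty_months": 12},
        "evidence": ["https://reviews.example/trailrunner-3"],  # for the agent to verify
    }
    print(listing["price"])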

And, also as the classical economists dream, the buyers will be entirely rational, not swayed by emotional appeals, brand identity, or personal loyalties.  (At least, we assume that AI buyers are rational and objective, although it won’t be easy to ensure that’s the case. That relates to trust, which is a topic for another day.)

If the role of marketing is to lay out virtual products on a virtual table in a virtual market stall, there’s no need for advertising: every buyer will pass by every stall and decide whether to engage.  With no need for advertising, there’s no need for targeting or personalization and no need for personal data to drive that targeting or personalization.  Privacy will be preserved simply because advertisers will no longer have any reason to violate it.

The key to business success in this world of omniscient, rational buyers is having a superior product, and, to a lesser extent, presenting product information in the most effective way possible.  There’s still some room for puffery and creativity in the presentation, although presumably mechanisms such as consumer reviews and independent research will keep marketers reasonably honest.  (Trust, again.)   There’s probably more room for creativity in developing the products themselves and constructing a superior experience that extends beyond the product to the full package including pricing, service, and support.

We can expect the AIs to play a major role in developing those new and optimal products and experiences, although I suspect the pro-human romantic in everyone reading this (except you, Q-2X7Y) hopes that people will still have something special to contribute.  But, wherever the products themselves come from, it will be up to the marketing AI to present them effectively to the AI shoppers.

(Side note: today’s programmatic ad buying marketplace comes fairly close to the model I’m proposing.  The obvious difference is the auction model, where buyers bid for a limited supply of ad impressions.  It’s conceivable that the consumer marketplace would also use an auction.  Again, just because most of today’s shopping is based on a fixed-price model, we shouldn’t assume that model will continue in the future.  Come to think of it, an auction would probably be the best approach, since buyers could adjust their bids based on their current needs and preferences, and sellers could adjust their prices based on inventory and current demand.  In the traditional marketplace, this would be called haggling, or negotiating, and it’s the way buying has been done for most of history.  With perfect information on both sides, the classical economists would be pleased yet again.  It could be fruitful to explore other analogies with the programmatic marketplace when trying to predict how the AI-to-AI marketplace will play out.)

(You could also argue that Amazon, Expedia, and similar online marketplaces already offer a place for virtual sellers to offer their virtual wares to all comers.  Indeed they do, but the key difference is that searching on Amazon requires a painfully inefficient use of human time.  If Amazon evolves a really good AI-based search method, and can convince users to share enough data to make the searches fully personalized, it could indeed become the basis for what I’m proposing.  The biggest barrier to this is more likely to be trust than technology.  It’s also worth noting that traditional marketing barely exists on those marketplaces.  The travel industry, where most marketing is centered on loyalty programs, may be an early indicator of where this leads.)

So what role, exactly, do humans play in this vision?  

As consumers, humans are no longer buyers.  Instead, they receive what’s purchased on their behalf.  So, their main role is to pick an AI and train it to understand their needs.  Of course, most of that training will happen without any direct human effort, as the AI watches what its owner does.  (I almost wrote “master”, but it’s not clear who’s really in charge.)  For people with disposable income, purchases are likely to move away from basic goods to luxury goods and experiences.  Those are inherently less susceptible to purely rational buying decisions, so there’s a good chance that conventional buying and attention-based marketing will still apply.

As workers, humans are in trouble.  Farming and manufacturing have been shrinking for decades and AI is likely to take over many of the remaining service jobs.  Some conventional jobs will remain to do research and supervise the AI-driven machines, and there may be more jobs where it matters that the worker is human, such as sports and handcrafts, and where human interaction is part of the value, such as healthcare.  But total employment seems likely to decrease and income inequalities to grow.  It’s possible that wealthy nations will provide a guaranteed annual income to the under-employed.  But even if that happens, meaningful work will become harder to find.

I’ll admit this isn’t a terribly pleasant prospect.  The good news is, predictions are hard, so the odds are slim that I’m right.   I’m also aware that we’re at the peak of the hype cycle for ChatGPT and perhaps for AI in general.  Maybe what I’ve described above isn’t technically possible.  But, given how quickly AI and the underlying technologies evolve, I wouldn’t bet on technology bottlenecks blocking these changes indefinitely.  Quantum AI, anyone?

All that said, of the three major predictions, I’m most confident about the first.  It’s pretty likely that a monolithic marketing AI will emerge from the specialized AI bots that are being offered today.  The potential benefits are huge, the path from separate bots to an integrated system is an incremental progression, and some people are already moving in that direction.   (Pro tip: it’s easier to predict things that have already happened.)

The emergence of an AI-driven marketplace to replace conventional human buying is much less certain.  If it does happen, the delegation will emerge in stages.  The first will cover markets where the stakes are low and buying is boring.  Groceries are a likely example.  How quickly it spreads to other sectors will depend on how much time people have and how much intrinsic enjoyment they derive from the shopping itself.

The role of humans is least predictable of all.  Mass under-employment probably isn’t sustainable in the long run, although you could argue it’s already the reality in some parts of the world.  The range of possible long-term outcomes runs from delightful to horrific.  Where we end up will depend on many factors other than the development of AI.  The best we can do is try to understand developments as they happen and try to steer things in the best directions possible.