But there’s another model I find more helpful. This looks not at who adopts new technologies but at how they’re deployed over time. There's a consistent pattern:
- Substitution (the new technology is deployed to exactly replace the old technology without changing the surrounding processes or workflows)
- Transformation (the process is adapted to take advantage of the capabilities of the new technology)
- Infrastructure change (new infrastructure is developed to support the transformed process)
- Business model change (a new business model emerges to take advantage of the transformed processes and infrastructure)
I haven’t found any single source that presents this model in quite these terms, which may mean I could name it after myself, although for now I'll just use STIB. Some close matches include Ruben Puentedura’s SAMR Model for educational technology (substitution, augmentation, modification, redefinition), Deloitte’s Three Horizons Model (process optimization, process flow and quality, new business models), and Carlota Perez’s theory of technological revolutions (radical innovation, initial optimization, incremental innovations, maturity).
The paradigmatic example, at least for me, is the deployment of electric power in U.S. factories. This started with connecting electric motors to the line shafts that were previously driven by waterwheels or steam engines (substitution). After several intermediate changes, the endpoint was attaching a separate motor directly to each machine tool (transformation). This enabled an infrastructure change: no longer constrained by the limits of mechanical power transfer or by proximity to water or coal, factories became cleaner, larger, single-story, and located near other resources or markets. The new infrastructure resulted in a new business model featuring centralized mass production and standardized national brands.
Perhaps you can't relate to machine tools. Fair enough. Consider the automobile instead. The first cars were literally horseless carriages, pretty much the same as horse-drawn carriages except that a mechanical motor replaced the horse. That was substitution. Over time, the design of the cars was modified to take better advantage of having an internal combustion engine. That was transformation. Once it became clear the new model was going to be a success, manufacturers developed new mass-production business models that better suited the new products and processes. Ultimately, this was supported by new infrastructure: roads, filling stations, repair shops, traffic rules, licensing requirements, insurance products, and finally an entire auto-based suburban landscape.
This may seem academic, but it applies directly to something you probably care about: the growth of AI. Seen through the STIB framework, most of the AI applications we see today are substitutions: an AI copywriter replaces a human copywriter in an unchanged workflow. The omnipresent co-pilots are another, even less disruptive type of substitution: they help humans perform the same tasks more efficiently, again without changing the workflow.
But it’s clear that these substitutions don’t take full advantage of AI’s potential. Again referring to the STIB framework, the question is what the transformed applications will look like. To answer this, we conduct a thought experiment: what would the process look like if it were redesigned to make best use of the new technology? For electric motors in factories, the final form was motors attached directly to machine tools. For most customer data and customer-facing operations, the final form of AI is likely to be a single process that completes all the previously separate steps at once. This is because AI is not limited by the need to have different specialists perform each step in the workflow, a constraint that results from the inability of mere humans to master more than one specialty, and from the need for experts to check the output of each step before moving on to the next.
But I don't think unified execution is the final form of the AI transformation. A deeper change is likely to remove discrete units such as customer segments, content pieces, campaign flows, and maybe even standardized products. Those exist because humans could only manage small numbers of segments, messages, campaigns, and products. An AI could handle more-or-less infinite numbers of these, which in practice would mean treating each customer and each interaction individually.
This leads to an end-state of “hyper-personalized” messaging, where content is custom generated on the fly for each customer and context.
Imagine an all-knowing, all-seeing bot that listens to what’s happening in the market and jumps into action each time it sees an opportunity to do something useful. The action will be optimized using all relevant data, including the company’s own information about the customer; second-party, third-party, and public information about the customer; behaviors of other customers; and market conditions, inventory, and who knows what else. In another dimension, this listening can extend beyond company-owned systems such as websites and contact centers, to include the appearance of customers on third-party sites (already available to some degree through programmatic ad bidding) and even in walled gardens (which already receive lists of customers to watch for; the change would be to open a channel that lets the company assess the situation, generate the optimal message, and send it back to the walled garden for delivery.)*
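The listening bot described above is, at its core, an event loop: score each incoming event against all available data, and act only when the expected value clears a threshold. Here is a minimal sketch of that loop; every name in it (`Event`, `score_opportunity`, the scoring formula) is a hypothetical illustration, not a real product or API.

```python
# Sketch of an "opportunity bot": score each market event using profile
# and market data, act only when expected value clears a threshold.
# All names and the toy scoring formula are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Event:
    customer_id: str
    source: str            # e.g. "website", "ad_exchange", "walled_garden"
    context: dict = field(default_factory=dict)

def score_opportunity(event, profiles, market):
    """Combine first-party profile data with market signals into a
    single expected-value score for acting on this event."""
    profile = profiles.get(event.customer_id, {})
    base = profile.get("lifetime_value", 0.0) * market.get("demand", 1.0)
    fit = 1.0 if event.context.get("intent") == "purchase" else 0.3
    return base * fit

def act_on(event):
    # A real system would generate custom content here.
    return f"personalized message for {event.customer_id} via {event.source}"

def run_bot(events, profiles, market, threshold=50.0):
    actions = []
    for event in events:   # in production this would be a live stream
        if score_opportunity(event, profiles, market) >= threshold:
            actions.append(act_on(event))
    return actions
```

The key design point is that the same loop handles events from any source, whether a company-owned website or a third-party channel, which is what lets one process replace many channel-specific workflows.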
It should immediately be clear that this vision requires infrastructure and business model changes from what’s available today. A much-improved data sharing infrastructure is needed to monitor behavior and access data outside of company-owned systems. This implies new business models to compensate external data owners for access to their information. Perhaps the data owners would charge a fee for letting companies monitor their data streams or query their data stores; or maybe they would only charge for data that a company uses; or perhaps the fees would be based on outcomes such as clicks or sales. Most of these schemes will ultimately require some way to estimate the value contributed by a particular piece of data.
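One simple way to estimate the value a data source contributes is a leave-one-out comparison: predict the outcome with and without that source and attribute the difference to it (real schemes might use Shapley values or holdout experiments instead). The sketch below assumes a toy conversion model; the source names and lift figures are invented for illustration.

```python
# Sketch of leave-one-out data valuation: the value of a source is the
# change in predicted revenue when that source is removed.
# predict_conversion is a toy stand-in for a real outcome model.

def predict_conversion(signals):
    """Toy model: each available data source adds a fixed lift to
    conversion probability (illustrative numbers only)."""
    lift = {"first_party": 0.10, "retailer_feed": 0.05, "public_web": 0.02}
    return min(1.0, sum(lift.get(s, 0.0) for s in signals))

def marginal_value(source, signals, order_value):
    """Expected revenue attributable to one data source."""
    with_source = predict_conversion(signals)
    without_source = predict_conversion([s for s in signals if s != source])
    return (with_source - without_source) * order_value
```

A usage-based or outcome-based fee could then be computed directly from `marginal_value`, which is why the estimation problem sits underneath all of the pricing schemes mentioned above.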
Hyper-personalized message delivery requires more infrastructure and business model innovations. One of the most important developments in marketing today is the emergence of new channels that allow direct customer interaction: these include interactive TV, social commerce, online games, retail media, and even interactive podcasts and out-of-home advertising. All are alternatives to common web display ads, social media ads, and search ads, which are also becoming more interactive. As with data, the key change made possible by AI is the ability to monitor vastly more opportunities at once, to evaluate the potential of each opportunity in real time, and to take advantage of the opportunities offering the greatest value.
I certainly hope that everyone reading this realizes that what I’m describing is far beyond the capabilities of today’s AI systems. The data access process requires AI to continuously ingest, clean, and integrate data from multiple sources and to automatically adapt as new sources appear and established sources change. Remember that AI is just beginning to address the bottleneck of incorporating new data sources into today’s CDP and warehouse systems. Similarly, we’re just beginning to see AI systems deliver intermediate steps on the way to hyper-personalized messaging. Today’s cutting edge is automated campaign design, which at best (and with much-needed human quality checks) could transform a user’s prompt into a complete campaign package of audience selection, content, and delivery rules and then execute that package. While impressive, this still uses the conventional structure of a few discrete segments, content pieces, and campaign flows. That makes it closer to substitution than true transformation.
Another way to look at this is that the vision offers a roadmap for future AI development. The current frontiers in AI are goal-seeking agents, access to external data (Anthropic’s Model Context Protocol), and agent cooperation (Google’s Agent2Agent protocol). (See this Medium post for a good overview of these.) If my vision is correct (which is by no means certain), steps beyond those frontiers will include proactive data gathering and integration, automated data value assessments, greater situation awareness, and better simulation of human behaviors. (I'd really like to say "understanding" of human behaviors but don't think we can quite attribute that to AI.) AI will also need more economical processing and reliable guardrails against hallucinations, biases, privacy breaches, and generally bad behavior.
The table below offers a more detailed view of where I think things are headed. It looks at four major customer data processes: customer data management, people issues related to customer data management, customer data activation, and advertising. For each process, it lists the required capabilities for each of the four diffusion stages. You can think of these as requirements for new, AI-based products.
AI Applications for Customer Data (STIB Model)

| | Data Management | People | Activation | Advertising |
|---|---|---|---|---|
| Substitute [execute via co-pilots and agents] | Data collection, ID resolution, connectors, metadata | Understand applications, requirements, training | Segmentation, analytics, prediction, sharing, privacy | Audience assembly, media buys, data buys |
| Transform [execute via unified AI systems] | Unified process, data as service | Management tools, define goals/prompts, explore opportunities | Hyper-personalize messages | Deliver best customer, data, channel/media; optimize spend |
| Infrastructure [required capabilities; many delivered via AI] | Automated data access, security, privacy, quality, transforms | Learning systems, training systems, process design systems | Efficient processing, attribution, instant commerce, buyer agents | Secure data sharing, consented data assembly, contextual targeting, marketplaces, fractional billing |
| Business Model [rely on AI for analytics and operations] | Value-based pricing | Pay for skill achievement | Goal achievement, sales as a service | Goal achievement, audience as a service, value-based pricing |
Where does all this lead? Here are the main points I hope you’ll take away:
1. The impact of AI on customer data is just beginning. We can expect AI to be deployed in the same pattern as other technologies. At first, it will substitute for humans or non-AI systems by performing the same tasks within existing workflows. Over time, it will transform those workflows into new processes that take full advantage of what AI can do. Ultimately, the industry will develop new business models and infrastructures to support the transformed processes.
2. For developers: consider which stage your AI project is targeting. While substitution is low-hanging fruit, bear in mind that current processes will soon be obsolete. Consider developing products that support transformed processes and their related business models and infrastructures.
3. For users: current self-service AI tools may enable you to build your own substitutions with minimal investment. Unlike system developers, you can afford to deploy these now and then discard them when something better comes along.
4. If you have a vision for a transformed process, you could try to build it. It won't achieve its full potential because the supporting AI capabilities, infrastructure, and business models are not yet available. But it might deliver enough value to justify the investment: imagine hyper-personalization based only on one company's own data and deployed only on the company's own customer-facing systems.
5. The final form of the transformed process will emerge over time. As companies experiment with different approaches, the industry will converge on an optimal design. Commercial developers will then build systems with this design and flesh out the supporting AI features, infrastructures, and business models. As with most complex systems, commercial vendors will probably own most of the market because they can afford to invest more in their products than most individual firms.
6. Standardized integration mechanisms will play an important role in this new world. This assumes I'm right that the transformed process will rely heavily on connecting to external data and external delivery channels. This should make integration a particularly fruitful area for investment if you're a developer, or for building expertise if you're a user.
__________________________________________________________________________________
*As a practical matter, it's unlikely that thousands of companies would separately monitor thousands of data sources and delivery touchpoints. It's more likely that data sources and delivery systems will broadcast event information which marketers can read and react to as they see fit. Perhaps the sources and touchpoints will package their events into channels covering different customer groups or event types and let marketers subscribe to the channels they want.
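The broadcast model in this footnote is essentially publish/subscribe: sources publish events into named channels, and each marketer subscribes only to the channels it cares about. A minimal sketch follows; the class and channel names are hypothetical, not a proposed standard.

```python
# Sketch of the footnote's broadcast model: sources publish events into
# named channels; marketers subscribe to the channels they want.
# All names are illustrative assumptions.
from collections import defaultdict

class EventBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # channel -> list of callbacks

    def subscribe(self, channel, callback):
        """Register a marketer's handler for one channel."""
        self.subscribers[channel].append(callback)

    def publish(self, channel, event):
        """A data source or touchpoint broadcasts an event; every
        subscriber to that channel receives it."""
        for callback in self.subscribers[channel]:
            callback(event)
```

Because subscribers never contact the sources directly, thousands of marketers can react to thousands of sources without the all-pairs monitoring the footnote rules out as impractical.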