Thursday, October 30, 2025

What would an AI-native marketing process look like? ChatGPT's answer is better than I expected

If you look at recent vendor announcements, the cutting edge in applying AI to marketing seems to be 'end to end' campaign generation, where a collection of agents works together to build and execute a complete marketing campaign with no human effort (although, for practical purposes, still under human supervision).  As I've written before, this strikes me as an advanced type of 'substitution', where the new tech is used to execute the old processes better and faster.  That's fine and valuable, but it's only the first step in taking advantage of the new technology.  The final step is to develop new, "transformed" processes that are optimized to make full use of the new tech's capabilities.  

I've recently been asking myself and others what they think that "transformed" process would look like for marketing.  I'm still collecting my thoughts but, as part of the process, I asked ChatGPT for its opinion.  The answer was better than I expected.  I don't necessarily agree with all of it, but I thought it was interesting enough to present to the larger world for comment.  So, please, let me know what you think.

Prompt: if marketing techniques were designed from scratch to take full advantage of AI, what would they look like?

That’s an excellent and forward-thinking question — and the answer’s pretty transformative.

If we were to design marketing from the ground up for AI (instead of retrofitting AI into traditional marketing), we’d end up with a system that looks less like “campaigns” and more like continuously learning ecosystems that respond in real time to individuals and social dynamics.

Here’s a breakdown of what that might look like:


1. Autonomous, Adaptive Market Intelligence

Instead of periodic market research and segmentation,
AI-first marketing would run on constant sensing loops:

  • AI agents continuously scan social media, purchase data, and search patterns.
  • They detect cultural shifts, emerging memes, and micro-trends as they form.
  • Personas aren’t static — AI builds and dissolves them dynamically based on live data.

Analogy: You don’t define “target audiences.” You discover living patterns of attention in real time.
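The dynamic-persona idea above can be illustrated with a toy online-clustering loop: personas form around incoming signals, drift as data arrives, and dissolve when attention moves on. This is a minimal sketch, assuming each audience signal arrives as a small numeric feature vector; the radius, decay rate, and weight threshold are invented parameters, not recommendations:

```python
import math

class DynamicPersonas:
    """Toy online clustering: personas form, update, and dissolve from live data."""

    def __init__(self, radius=1.0, decay=0.95, min_weight=0.5):
        self.radius = radius          # max distance to join an existing persona
        self.decay = decay            # per-observation forgetting factor
        self.min_weight = min_weight  # personas below this weight dissolve
        self.personas = []            # list of {"center": [...], "weight": float}

    def observe(self, point):
        # Decay every persona so stale ones fade out over time.
        for p in self.personas:
            p["weight"] *= self.decay
        # Find the nearest existing persona.
        best, best_d = None, float("inf")
        for p in self.personas:
            d = math.dist(p["center"], point)
            if d < best_d:
                best, best_d = p, d
        if best is not None and best_d <= self.radius:
            # Join it: move the center toward the new point, weighted by history.
            w = best["weight"]
            best["center"] = [(c * w + x) / (w + 1) for c, x in zip(best["center"], point)]
            best["weight"] = w + 1
        else:
            # No persona fits: a new pattern of attention has appeared.
            self.personas.append({"center": list(point), "weight": 1.0})
        # Dissolve personas whose weight has decayed away.
        self.personas = [p for p in self.personas if p["weight"] >= self.min_weight]
        return len(self.personas)
```

Feeding a stream of behavioral vectors through `observe` yields a persona set that is never declared up front: it is whatever the live data currently supports.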


2. Hyper-Personalized Narrative Engines

Instead of a single ad creative for millions of viewers,
AI-first marketing generates unique narrative journeys per person:

  • Language models craft individualized story arcs and product framing that match someone’s identity, tone preferences, and current emotional state.
  • Multimodal models (text, image, video, audio) assemble personalized content on the fly — like an AI “showrunner” creating micro-stories for each user.
  • Brand consistency is enforced by a central “brand DNA model” ensuring all outputs feel coherent and on-brand, even when autonomously generated.

3. Predictive and Generative Commerce

Instead of advertising then selling,
AI-first marketing anticipates needs and builds products or bundles before people articulate them:

  • Predictive demand modeling uses behavioral and environmental data to suggest what products should exist.
  • AI may prototype and A/B-test new offerings via virtual storefronts, gauging interest before production.
  • Marketing and product design merge — you market possibilities before they exist, and only produce what the market proves it wants.
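The "market possibilities before they exist" idea maps naturally onto a bandit experiment. Below is a minimal Thompson-sampling simulation, assuming a hypothetical virtual storefront where each visitor is shown one prototype bundle; the bundle names and their "true" demand rates are invented test data, and the visitor responses are simulated:

```python
import random

# Hypothetical prototype bundles in a virtual storefront, mapped to made-up
# true interest rates used only to simulate visitor behavior.
prototypes = {"starter-bundle": 0.05, "pro-bundle": 0.15, "family-bundle": 0.02}

def test_demand(n_visitors=5000, seed=0):
    """Thompson sampling: show each visitor one prototype, record interest,
    and let traffic concentrate on the winner before anything is produced."""
    rng = random.Random(seed)
    # Beta(1, 1) priors: one phantom hit and miss per prototype.
    stats = {name: {"hits": 1, "misses": 1} for name in prototypes}
    for _ in range(n_visitors):
        # Sample a plausible demand rate per prototype; show the best draw.
        shown = max(stats, key=lambda n: rng.betavariate(stats[n]["hits"],
                                                         stats[n]["misses"]))
        interested = rng.random() < prototypes[shown]  # simulated response
        stats[shown]["hits" if interested else "misses"] += 1
    # The prototype that earned the most traffic is the one to actually build.
    winner = max(stats, key=lambda n: stats[n]["hits"] + stats[n]["misses"])
    return winner, stats
```

In a real deployment the simulated response would be replaced by actual storefront signals (pre-orders, waitlist signups), but the decision logic is the same: production follows proven interest.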

4. Multi-Agent Relationship Systems

Instead of one-way brand broadcasts,
AI-first marketing uses relationship agents that represent both consumers and brands:

  • Consumers might have personal “shopping AIs” negotiating with brand AIs.
  • Trust, transparency, and reputation become measurable assets traded between AIs.
  • The marketing goal shifts from persuasion to mutual optimization — aligning a person’s needs and a company’s offerings through agent collaboration.

5. Real-Time Ethics and Trust Architecture

AI-driven marketing would require built-in ethical governors:

  • Explainability modules ensure users understand why they’re being shown something.
  • Consent becomes dynamic and contextual (you can allow certain data uses for limited time).
  • Bias-detection systems continuously audit generative campaigns for fairness and cultural sensitivity.
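The dynamic-consent bullet can be made concrete with a toy ledger in which every grant is scoped to a purpose and expires on its own. This is a minimal sketch; the API, the purpose names, and the injectable clock are illustrative assumptions, not a compliance design:

```python
import time

class ConsentLedger:
    """Toy dynamic-consent store: grants are per (user, purpose) and time-limited."""

    def __init__(self, clock=time.time):
        self.clock = clock   # injectable clock, so expiry is testable
        self.grants = {}     # (user, purpose) -> expiry timestamp

    def grant(self, user, purpose, ttl_seconds):
        # Consent is never open-ended: every grant carries an expiry.
        self.grants[(user, purpose)] = self.clock() + ttl_seconds

    def revoke(self, user, purpose):
        self.grants.pop((user, purpose), None)

    def allowed(self, user, purpose):
        expiry = self.grants.get((user, purpose))
        return expiry is not None and self.clock() < expiry
```

The point of the sketch is the default: a data use that was never granted, was revoked, or has simply aged out is denied without any further action from the user.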

6. Emotionally Intelligent Interaction

Instead of static messaging,
AI-first marketing converses, empathizes, and co-creates:

  • Emotional state detection (via tone, text, or visual cues) guides responses.
  • The AI adjusts its persuasive approach — sometimes offering information, sometimes humor, sometimes retreat — like a human skilled in timing and empathy.
  • Brands become more like adaptive characters than faceless institutions.

7. Closed Feedback Loop: Learn → Generate → Test → Learn

Every interaction feeds a continuous reinforcement loop:

  • Each ad, email, or product suggestion is an experiment.
  • The system measures micro-responses (hover time, tone change, conversion intent).
  • Those signals retrain the creative and strategy models daily.
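The loop above can be sketched as a small epsilon-greedy learner, a deliberately simplified stand-in for the fuller model retraining the text imagines: micro-responses collapse into one weighted reward, which nudges a running score per creative variant. The signal names and weights are assumptions made up for the sketch:

```python
import random

# Illustrative micro-signals and weights; these values are assumptions,
# not measured engagement coefficients.
SIGNAL_WEIGHTS = {"hover_seconds": 0.1, "scroll_past": -0.5, "click": 1.0}

class CreativeLoop:
    """Learn -> generate -> test -> learn: every impression is an experiment
    whose micro-responses update a running score per creative variant."""

    def __init__(self, variants, alpha=0.2, epsilon=0.1, seed=0):
        self.scores = {v: 0.0 for v in variants}
        self.alpha = alpha      # learning rate for the running score
        self.epsilon = epsilon  # fraction of traffic kept for exploration
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.scores))  # explore a variant
        return max(self.scores, key=self.scores.get)   # exploit the leader

    def record(self, variant, signals):
        # Collapse micro-responses into one reward, then move the score
        # toward it by an exponential moving average.
        reward = sum(SIGNAL_WEIGHTS.get(k, 0.0) * v for k, v in signals.items())
        self.scores[variant] += self.alpha * (reward - self.scores[variant])
```

Each `record` call is one completed experiment; each `choose` call is the next one, so the creative strategy is retrained continuously rather than on a campaign calendar.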

In summary

AI-native marketing is:

  • Continuous (not campaign-based)
  • Conversational (not broadcast)
  • Collaborative (AI agents on both sides)
  • Generative (creating narratives, products, and experiences dynamically)
  • Ethically-audited and explainable (trust is as important as persuasion)

 

Sunday, October 26, 2025

How Will GenAI Change the Consulting Industry?



Reports on the threat that generative AI poses to knowledge workers usually focus on what might be called “structured creativity”: tasks like copywriting or computer programming where outputs are clearly specified through things like creative briefs or system requirements. One group that generally doesn’t consider itself at high risk is the analysts and consultants who write the reports themselves. After all, their work requires insights and judgments that are beyond the current capabilities of genAI, which ultimately does nothing more than play back the training data it's fed.

Case in point: This widely reported BCG study found that genAI tools substantially improved the performance of consultants at product planning, with the greatest improvements among below-average performers. A second finding of the same study was less widely noted: AI depressed performance at generating insights from interviews and raw data. This dovetails nicely with the idea that AI can help with structured tasks such as campaign management but isn’t (yet) capable of developing new insights. One reason that AI struggles with insights is its well-known tendency to hallucinate, which would translate into false conclusions about the meaning of input data. (See, for example, this BBC study that found 47% of AI chatbot answers contained significant errors about data sources or accuracy.)*

So it seems that the survey analyzers are indeed safe, at least for the time being. But while insights are an important part of the value provided by analysts and consultants, they’re just a small fraction of their work. In my experience, the main reasons that companies hire consultants are to access their skills at organizing a project and their knowledge of vendors to consider hiring.** Those are things that genAI does extremely well: ChatGPT will gladly provide a plausible project plan for nearly any task and will just as merrily answer your questions about which vendors to consider, providing detailed comparisons on relevant features. It's true that those comparisons will often be inaccurate (see the BBC study or do your own test in a field you know well). But, let’s face it, consultants also make mistakes about product capabilities, which change faster than even the most diligent industry expert can keep up with. So, whether a vendor list comes from genAI or a human consultant, it still has to be validated with current information before you base any decisions on it.

Even more to the point: just as it’s always been hard for clients to tell truly knowledgeable consultants from pretenders, it’s now equally hard to tell accurate genAI outputs from hallucinations. This applies to frameworks, vendor lists, and even those insights where humans are still clearly superior. GenAI is famously capable of providing reasonable-sounding arguments based on false premises, citing false evidence, and leading to wrong conclusions. A true optimist would argue those can lead to fruitful discussions even when they start with an error. A more realistic expectation is that some large fraction of people will accept them as delivered. 

This is especially true when it comes to comments on industry trends and futures, where there is no clearly right answer. A large part of the industry conversation today, especially in media like LinkedIn, is exactly that sort of discussion. AI-generated comments are increasingly common and, truth be told, can be hard to tell from the human-generated ones. Sure, that's a tough situation for industry members trying to get good information. But save some sympathy for the poor human consultants who are trying to be heard – and recognized as legitimate experts – amid all the noise. 

Indeed, the flood of content is arguably the biggest change in marketing in recent years. Not so long ago, publishing good content on a regular basis was an effective, low-budget path to success. The mechanism was simple: good content was picked up via social media sharing and, if it became popular enough, registered in search engine results. Today, not only is there too much content for anyone who’s not already famous to gain a significant share of attention, but AI search summaries will scoop up good information without leading readers back to the source. The alternative path of publishing on high-traffic news sites is equally blocked since AI search engines stifle traffic to those sites as well. Of course, a handful of influencers who are skilled at attracting attention can still become Internet-famous.  But what gains attention isn’t necessarily what’s of true value.***

I’ll skip the usual moaning about bad information driving out the good, since the consequences are all around us. Instead, I’ll posit that the situation is untenable because people simply will not be willing to consume such huge volumes of content indefinitely. There is already a backlash among people who want to reduce the time they spend trying to keep up. (In fact, time spent on social media peaked in 2022.)  The obvious solution is to let AI view media for you and summarize what’s worth seeing. Some people may find that acceptable, but I’d like to think that most will recognize that AI can’t be trusted to make the right choices – especially if the AI is run by corporations with interests of their own. The alternative would seem to be relying on human curators – remember “editors”? – to assess the available materials and select those worth consuming. Of course, deciding which curators to trust will itself require consumers to do some work. This path also requires an alternative to the failing economics of traditional, ad-supported media, which will probably be consumer-paid subscriptions that compensate both the content creators and the curators. A middle ground might be AI curators that are funded by consumers rather than self-interested corporate providers.

I won’t pretend to know which approach will dominate. Perhaps different groups will adopt different approaches, with access to accurate information becoming a luxury good like ad-free TV. For now, people can already be selective in their content consumption by avoiding the undifferentiated (or, more precisely, platform-selected) streams in news and social media feeds. The dilemma is sharper for experts relying on content creation to build their business: do they try to keep up with content bots that can easily outproduce them, recognizing that even consistently brilliant, unique, and insightful content won’t necessarily be noticed? Or do they find other ways to reach prospective clients, perhaps through live events or less mediated platforms like Reddit and Medium?

Mind you, this is occurring in a world where AI can legitimately replace some of the expertise consultants are used to selling, such as methodologies, technical skills, and vendor knowledge. This leads to smaller engagements where clients are buying only the AI oversight and creative insights that humans alone can provide. So does the total market for consulting services shrink at exactly the same time that selling those services becomes harder? Or, more optimistically, does lower cost enable consultants to expand the market by selling to firms that couldn’t previously afford their services? (See, for example, this new WPP offering of AI-enabled “self-service” ad campaign development.) While AI-powered services don’t themselves create more human jobs, they do presumably offer an opportunity to sell insight-based projects that do require paying humans.

I'll say it again: the future is unknown. But I do believe that analysts who are calmly assessing the threat that AI poses to other people’s jobs might give more urgent thought to the threat AI poses to their own.