Sunday, October 26, 2025

How Will GenAI Change the Consulting Industry?

Reports on the threat that generative AI poses to knowledge workers usually focus on what might be called “structured creativity”: tasks like copywriting or computer programming where outputs are clearly specified through things like creative briefs or system requirements. One group that generally doesn’t consider itself at high risk is the analysts and consultants who write the reports themselves. After all, their work requires insights and judgements that are beyond the current capabilities of genAI, which ultimately does nothing more than play back the training data it's fed.

Case in point: This widely reported BCG study found that genAI tools substantially improved the performance of consultants at product planning, with the greatest improvements among below-average performers. A second finding of the same study was less widely noted: AI depressed performance at generating insights from interviews and raw data. This dovetails nicely with the idea that AI can help with structured tasks such as campaign management but isn’t (yet) capable of developing new insights. One reason that AI struggles with insights is its well-known tendency to hallucinate, which would translate into false conclusions about the meaning of input data. (See, for example, this BBC study that found 47% of AI chatbot answers contained significant errors about data sources or accuracy.)*

So it seems that the survey analyzers are indeed safe, at least for the time being. But while insights are an important part of the value provided by analysts and consultants, they’re just a small fraction of their work. In my experience, the main reasons that companies hire consultants are to access their skill at organizing a project and their knowledge of which vendors to consider hiring.** Those are things that genAI does extremely well: ChatGPT will gladly provide a plausible project plan for nearly any task and will just as merrily answer your questions about which vendors to consider, providing detailed comparisons of relevant features. It's true that those comparisons will often be inaccurate (see the BBC study, or do your own test in a field you know well). But, let’s face it, consultants also make mistakes about product capabilities, which change much faster than even the most diligent industry expert can track. So, whether a vendor list comes from genAI or a human consultant, it still has to be validated with current information before you base any decisions on it.

Even more to the point: just as it’s always been hard for clients to tell truly knowledgeable consultants from pretenders, it’s now equally hard to tell accurate genAI outputs from hallucinations. This applies to frameworks, vendor lists, and even those insights where humans are still clearly superior. GenAI is famously capable of providing reasonable-sounding arguments that are based on false premises, cite false evidence, and lead to wrong conclusions. A true optimist would argue that such arguments can spark fruitful discussions even when they start from an error. A more realistic expectation is that a large fraction of people will accept them as delivered.

This is especially true for comments on industry trends and futures, where there is no clearly right answer. A large part of the industry conversation today, especially in media like LinkedIn, is exactly that sort of discussion. AI-generated comments are increasingly common and, truth be told, can be hard to distinguish from human-generated ones. Sure, that's a tough situation for industry members trying to get good information. But save some sympathy for the poor human consultants who are trying to be heard – and recognized as legitimate experts – amid all the noise.

Indeed, the flood of content is arguably the biggest change in marketing in recent years. Not so long ago, publishing good content on a regular basis was an effective, low-budget path to success. The mechanism was simple: good content was picked up via social media sharing and, if it became popular enough, rose in search engine rankings. Today, not only is there too much content for anyone who’s not already famous to gain a significant share of attention, but AI search summaries will scoop up good information without leading readers back to the source. The alternative path of publishing on high-traffic news sites is equally blocked, since AI search engines stifle traffic to those sites as well. Of course, a handful of influencers who are skilled at attracting attention can still become Internet-famous. But what gains attention isn’t necessarily what’s of true value.***

I’ll skip the usual moaning about bad information driving out the good, since the consequences are all around us. Instead, I’ll posit that the situation is untenable because people simply will not be willing to consume such huge volumes of content indefinitely. There is already a backlash among people who want to reduce the time they spend trying to keep up. (In fact, time spent on social media peaked in 2022.) The obvious solution is to let AI view media for you and summarize what’s worth seeing. Some people may find that acceptable, but I’d like to think that most will recognize that AI can’t be trusted to make the right choices – especially if the AI is run by corporations with interests of their own. The alternative would seem to be relying on human curators – remember “editors”? – to assess the available materials and select those worth consuming. Of course, deciding which curators to trust will itself require consumers to do some work. This path also requires an alternative to the failing economics of traditional, ad-supported media; the likeliest candidate is consumer-paid subscriptions that compensate both the content creators and the curators. A middle ground might be AI curators that are funded by consumers rather than self-interested corporate providers.

I won’t pretend to know which approach will dominate. Perhaps different groups will adopt different approaches, with access to accurate information becoming a luxury good like ad-free TV. For now, people can already be selective in their content consumption by avoiding the undifferentiated (or, more precisely, platform-selected) streams in news and social media feeds. The dilemma is sharper for experts who rely on content creation to build their business: do they try to keep up with content bots that can easily outproduce them, recognizing that even consistently brilliant, unique, and insightful content won’t necessarily be noticed? Or do they find other ways to reach prospective clients, perhaps through live events or less mediated platforms like Reddit and Medium?

Mind you, this is occurring in a world where AI can legitimately replace some of the expertise consultants are used to selling, such as methodologies, technical skills, and vendor knowledge. This leads to smaller engagements in which clients buy only the AI oversight and creative insights that humans alone can provide. So does the total market for consulting services shrink at exactly the same time that selling those services becomes harder? Or, more optimistically, does lower cost enable consultants to expand the market by selling to firms that couldn’t previously afford their services? (See, for example, this new WPP offering of AI-enabled “self-service” ad campaign development.) While AI-powered services don’t themselves create more human jobs, they do presumably offer an opportunity to sell insight-based projects that still require paid human work.

I'll say it again: the future is unknown. But I do believe that analysts who are calmly assessing the threat that AI poses to other people’s jobs might give more urgent thought to the threat AI poses to their own.