Wednesday, February 17, 2021

Is Peak Martech Approaching At Last?


Contrary to popular belief, forecasting is easy: tomorrow is nearly always like today. What’s hard is predicting when something will change: a snowstorm, stock market crash, or disruptive technology. Of course, predicting change is also where forecasting is most useful.

In marketing technology, we’ve seen a long succession of sunny days. Every year, the number of systems grows, fed by a proliferation of channels, declining development costs, and easily available funding. The safe bet is the number will grow next year, too. But I think one day soon the pendulum will reverse direction.

I can’t point to much data in support of my position. Surveys do show that marketers don’t use the full capabilities of their existing stacks, which might mean they’re inclined to take a break before making new purchases. But marketers have never used every feature in their old systems before buying new ones. The pandemic probably led to a temporary dip in martech purchases but budgets appear to be opening up again. So the appetite for new martech will likely reappear. 

(Update, August 2021: a Mulesoft survey showed the average number of apps used by large enterprises had fallen from a peak of 1,020 in 2018 to 843 in 2021, and Gartner's July 2021 CMO Spend Survey reported a sharp drop in marketing spending, from 11% of company revenue in 2020 to 6.4% in 2021, which would imply a drop in martech spending too.  I'm skeptical of the Gartner data, but both studies support the idea that martech proliferation may decrease.)

My prediction is based less on martech trends than on a general impression that many people feel the world is spinning out of control and want to rein it in. Tech in particular is having impacts that no one fully understands. Concerns about disinformation, social media-induced radicalization, lost privacy, and biased artificial intelligence are all part of this. Even in the narrower spheres of martech and adtech, many users feel their systems are too complicated to really understand. Worries about ad fraud, ads appearing in the wrong places, inappropriate personalization, and unintended campaign messages all come down to the same thing: people worry their systems are making unseen bad decisions.

Technologists tend to feel the cure is more technology: smarter AI, systems checking on other systems, and democratized development to let more people build systems for themselves. But there’s an air of hubris to this. Stories from Daedalus to Frankenstein to Jurassic Park warn us that advanced technology will ultimately destroy its creators. Every data breach and wifi outage reminds us that no technology is entirely reliable and that fixing it is beyond most people’s control.

As a result, non-technologists increasingly doubt that technology can solve its own problems. Some people will bury their worries, accepting technology’s risk as the price for its benefits. Others will take the opposite extreme, rejecting technology altogether, or at least to the greatest degree possible.

But there’s a middle ground between blindly surfing the net and leaving the grid entirely. This is to consciously seek technology that’s simpler and more controllable than current extremes, even if it’s also less powerful as a result. The key is willingness to make that trade-off, which in turn implies willingness to invest the effort needed to assess the relative value vs. risk of different technical options.

Making that investment is probably the biggest change from the current default of accepting technical progress as inevitable and trusting the technologists to balance risk against reward when they decide which products to release. In many cases, the cost of assessment will be higher than the cost of using the diminished technology itself: that is, the difference in value between a more secure system and a less secure one may be less than the value of the time spent comparing them. In other words, the main cost of this adjustment isn’t the lost value from using safer technology but the cost of assessing that technology.
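
To make that trade-off concrete, here is a toy formalization. The notation is mine, purely for illustration; nothing in the argument depends on it:

```latex
% Toy model of the adjustment cost (illustrative notation only, not the post's):
%   V_p = value of the more powerful but riskier option
%   V_s = value of the simpler, safer option
%   C_a = cost of assessing the two options
\[
\text{cost of switching}
  = \underbrace{(V_p - V_s)}_{\text{lost value}}
  + \underbrace{C_a}_{\text{assessment cost}},
\qquad \text{claim: } C_a > V_p - V_s .
\]
```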

In theory, the assessment cost might be reduced by splitting it among many people who would share their results. But here’s where trust comes back into play: if you can’t trust someone else to do accurate research, you can’t decide based on their results. Since loss of trust is arguably the defining crisis of today’s society, you can’t just wave it away with an assumption that people will trust others’ assessments of technology tradeoffs. Rather, the need will be to build technology that is self-evidently understandable, so that each person can assess it for herself. This will reduce the assessment cost that blocks people from choosing simpler solutions.

So here’s where I think we’re headed: away from ever-increasing, and increasingly opaque, technical complexity, and towards technology that’s simpler and more transparent. Remember: simplicity is the goal, and transparency is what makes it affordable. I call this the “new pragmatism”, although I doubt the label will catch on. As the word “pragmatism” suggests, it’s rather boring and a lot of hard work. But compared with the chaos or authoritarianism that seem to be the main alternatives, it’s about the best way we can hope our current chapter will end. After we turn the page, people may later learn to rebuild the presumption of trust that enables non-verifiable relationships.

If you’re still reading this, thanks for your indulgence; I know you don’t come to this blog for half-baked social theories. But these ideas do have direct implications for marketing and martech. If I’m right, both consumers and martech buyers will want simpler, more transparent products. For marketers, this means:

  • The time may finally have come when stripped-down versions replace feature-rich products, with a stress on ease of use rather than power. I know this idea has been tried before without success. But that was during the earlier age of techno-optimism.

  • Buyers may be more interested in products whose actual operation is transparent. This will usually mean status indicators, meters, and diagnostics to show what’s happening. In some cases it may literally mean see-through designs that let users watch, say, as the dishes are cleaned or the vacuum bag fills with dirt. Whatever it takes for a feeling of control.

  • Privacy will continue to gain importance, with particular emphasis on systems that are private by design rather than user choice. Privacy policies and options are poorly understood and mistrusted, so many consumers would rather buy a system that makes them unnecessary because it can’t collect data or connect to the internet. Of course, they need to be confident the system behaves as promised.

  • Marketing messages should also switch from promoting advanced technology to promoting simplicity, reliability, and clarity. Explanations of what’s inside a product (its technology, design and manufacturing processes, materials, and people) may be more important to buyers looking for reasons to trust.

  • Marketing methods should match the claims, avoiding unnecessary personalization and staying away from mistrusted media. This is a tricky balance because few marketers will want to sacrifice the performance benefits that come from data-driven targeting. But they do need to weigh long-term brand value against short-term campaign results. For what it’s worth, relying more on basic branding and less on advanced technology is itself consistent with the return to simplicity.

  • Martech vendors will want to make all these adjustments in their own marketing. Other considerations include:


    • Artificial intelligence must be understandable. It’s tempting to suggest discarding AI altogether, since it may be the ultimate example of complicated, opaque, and ungovernable technology. But the apparent benefits of AI are too great to discard. The pragmatic approach is to demand proof that AI really delivers the expected benefits. Then, assuming the answer is yes, find ways to make AI more controllable. This means building AI systems that explain their results, let users modify their decisions, and make it easy to monitor their behaviors. These are already goals of current AI development, so this is more a matter of adjusting priorities than taking AI in a fundamentally different direction. (For a sketch of what “explain their results” might look like, see the example after this list.)

    • Reconsider the platform/app model. This may be blasphemy in martech circles, since the martech explosion has been largely the result of platforms making it easier to sell specialized apps. But the platform/app model relies on trust that apps are effectively vetted by platform owners. If that trust isn’t present, assessment costs will pose a major barrier to new app adoption. At best, buyers with limited resources (which is everyone) will be able to afford fewer apps. At worst, people will stop using apps altogether. So the pragmatic approach for platforms and app developers alike is to work even harder at trust-building. We already see this, for example, in Apple’s new requirements for data privacy labels and tracking consent rules (https://developer.apple.com/app-store/user-privacy-and-data-use/). What Apple hasn’t done is aggressively audit compliance and publicize its audit programs. The dynamic here is that users will make more demands on platforms to prove they are trustworthy and will concentrate their purchases on platforms that succeed. Since selecting a platform carries its own assessment cost, we can expect users to deal with fewer platforms in total. This means the trend for every major vendor to develop its own platform ecosystem will reverse. Looking still further ahead: fewer platforms means the remaining platforms have more bargaining power with app developers, so we can expect higher acceptance standards (good) and higher fees (not so good). The ultimate result is fewer app developers as well.

    • Rebirth of suites. That’s not quite the right label, since suites never died. But the point is that buyers looking for simplicity and facing higher assessment costs will find suites more appealing than ever. Obviously, the suites themselves must meet the new standards for simplicity, value, and transparency, so integrated-in-name-only Frankensuites don’t get a free pass. But once a buyer has decided a suite vendor is trustworthy, it’s far more attractive to use a module from that suite than to assess and integrate a best-of-breed alternative. Less obvious but equally true: building a system in-house also becomes less attractive, since in-house developers will also need to prove that their products are effective and reliable. This will necessarily increase development costs, so the build/buy balance will tilt a little more towards buying, especially if the assessment costs of buying are minimal because the purchased option is part of a trusted suite. It’s true that this doesn’t apply if companies require users to accept whatever their in-house developers deliver. But that doesn’t sound like a viable long-term approach in a world where the gap between poor in-house systems and good commercial products will be larger than ever.

    • Limits on citizen developers. If blasphemy comes in degrees, this takes me to the professional-grade, eternal-damnation level. On one hand, nothing is more trusted than something a citizen developer creates for herself: she certainly knows how it works and can build in whatever transparency and monitoring she sees fit. So the new-pragmatic world is likely to see more, not fewer, user-built systems. But if we’ve learned anything from decades of using Excel, it’s that complex spreadsheets almost always contain hidden errors, are opaque to anyone except (maybe) the creator, and are exceedingly fragile when change is required. Other user-built solutions will inevitably have similar problems. So even if users trust whatever they’ve built for themselves, everyone else in the organization will be, and should be, exceedingly cautious in accepting it. In other words, the assessment cost will be almost insurmountably high for all but the simplest citizen-developed applications. This puts a natural, and probably shrinking, limit on the ability of citizen-developed systems to replace commercial software or in-house systems built by professional developers. In practice, citizen development will be largely limited to personal productivity hacks and maybe some prototyping of skunkworks projects. This doesn’t mean that no-code and low-code tools are useless: they will certainly be productivity-enhancers for professional developers. Don’t sell those Airtable options just yet.
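
To ground the “explain their results” idea from the AI bullet above, here is a minimal sketch of a scoring model that reports why it produced each score. Everything in it (the feature names, the weights, the score_lead function) is hypothetical, invented for illustration, not any vendor’s actual product:

```python
# A minimal sketch of "AI that explains its results": a linear lead-scoring
# model that returns per-feature contributions alongside the score.
# All names and numbers here are hypothetical illustrations.

WEIGHTS = {
    "email_opens": 0.8,               # hand-set, inspectable coefficients
    "site_visits": 0.5,
    "days_since_last_touch": -0.3,    # staleness lowers the score
}

def score_lead(features: dict) -> tuple:
    """Score a lead and report exactly why: one contribution per feature."""
    contributions = {
        name: weight * features.get(name, 0.0)
        for name, weight in WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, why = score_lead(
    {"email_opens": 4, "site_visits": 2, "days_since_last_touch": 10}
)
print(f"score = {score:.1f}")          # score = 1.2
for name, value in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.1f}")   # largest drivers listed first
```

The point isn’t the arithmetic: it’s that every number the user sees traces back to a weight they can inspect and change, which is what “controllable” means in practice.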

I’ll caution again that the picture I’m drawing here is far from certain to develop. I could be wrong about the change in social direction, although the alternative of continued disintegration is ugly to contemplate. Even if I’m right about the big shift, I could be wrong about its exact impact on marketing and martech. Still, I do believe that current trends cannot continue indefinitely, and it’s worth considering what might happen after their limits are reached. So what I’ll suggest is this: keep an eye out for developments that fit the pattern I’m suggesting, and be ready with suitable marketing and martech strategies if things move in that direction.

*                *                 *

Addendum: The core argument of this post is “people feel the world is spinning out of control and trust will solve that problem”. That feels like a non sequitur, since it’s not obvious how trust creates control. It also feels uncomfortably hierarchical, and perhaps elitist, if “control” implies a central authority. (Note: you might read “control” as referring to people controlling their own personal technology and data. But fully self-sovereign individuals can still cause chaos if there’s not some larger control framework to constrain their actions.)

But it’s not a non sequitur, because there is in fact a clear relationship between trust and control. Specifically:

  • Trust can be defined as the belief that someone will act in the way you want them to.
  • Control is a way to force someone to act in the way you want.
  • Thus, trust and control are complementary: the greater trust you have in someone, the less control you need over them (to still ensure they act the way you want).
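
One toy way to state that complementarity, with the caveat that the notation and the additivity assumption are mine, purely for illustration:

```latex
% Illustrative only: suppose the assurance A you need that someone will act
% as you want can come additively from either trust T or control C:
%   A <= T + C
% Then the control required falls as trust rises:
\[
C \;\ge\; A - T .
\]
```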
Although power-hungry people might enjoy control for its own sake, most people will care only about achieving the desired result. So the solution to a world “spinning out of control” isn’t necessarily reinstating hierarchical, elite authority; it can also be generating trust. Both yield the same outcome of predictable desired behaviors.

This applies in particular to the discussion of citizen development and no-code software, which seems to imply that applications can only be used by more than one person if there’s a central authority to coordinate and approve them. This is where “governance” comes in. It’s correct that self-built software needs to meet certain standards to be safely used and shared. But “governance” can be achieved either through control (a central authority enforces those standards) or through trust (convincing users to apply those standards by themselves). Either approach can work, but trust is clearly preferable.