Friday, August 25, 2017

Self-Driving Marketing Campaigns: Possible But Not Easy


A recent Forrester study found that most marketers expect artificial intelligence to take over the more routine parts of their jobs, allowing them to focus on creative and strategic work.


That’s been my attitude as well. More precisely, I see AI enabling marketers to provide the highly tailored experiences that customers now demand. Without AI, it would be impossible to make the number of decisions necessary to do this. In short, complexity is the problem, AI is the solution, and we all get Friday afternoons off. Happy ending.

But maybe it's not so simple.

Here’s the thing: we all know that AI works because it can learn from data. That lets it make the best choice in each situation, taking into account many more factors than humans can build into conventional decision rules. We also all know that machines can automatically adjust their choices as they learn from new data, allowing them to continuously adapt to new situations.
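
To make that concrete, here's a minimal sketch of that learn-and-adjust loop in Python. All the names, variants, and response rates are hypothetical; the point is only the shape of the thing: an epsilon-greedy bandit that picks among campaign treatments and shifts its choices as results come in.

```python
import random

# Minimal sketch of a learn-and-adjust loop: an epsilon-greedy bandit.
# All names and numbers here are invented for illustration.

variants = ["subject_a", "subject_b", "subject_c"]  # competing treatments
clicks = {v: 0 for v in variants}  # observed successes per variant
sends = {v: 0 for v in variants}   # observed trials per variant

def simulate_customer_response(variant):
    # Stand-in for the real world: "subject_b" happens to convert best.
    rates = {"subject_a": 0.03, "subject_b": 0.05, "subject_c": 0.02}
    return random.random() < rates[variant]

def choose_variant(epsilon=0.1):
    """Mostly exploit the best-performing variant, but keep exploring."""
    untried = [v for v in variants if sends[v] == 0]
    if untried:
        return random.choice(untried)
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / sends[v])

def record_result(variant, clicked):
    """New data arrives; future choices shift automatically."""
    sends[variant] += 1
    clicks[variant] += int(clicked)

for _ in range(1000):  # each send both uses and extends what's been learned
    v = choose_variant()
    record_result(v, simulate_customer_response(v))

print({v: round(clicks[v] / sends[v], 3) for v in variants})
```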

Anyone who's dug a bit deeper knows two more things:

  • self-adjustment only works in circumstances similar to the initial training conditions. AI systems don’t know what to do when they’re faced with something totally unexpected. Smart developers build their systems to recognize such situations, alert human supervisors, and fail gracefully by taking an action that is likely to be safe; a bare-bones sketch of that guard-and-fallback pattern follows this list. (This isn’t as easy as it sounds: a self-driving car shouldn’t stop in the middle of an intersection when it gets confused.)

  • AI systems of today and the near future are specialists. Each is trained to do a specific task like play chess, look for cancer in an X-ray, or bid on display ads. This means that something like a marketing campaign, which involves many specialized tasks, will require the cooperation of many AIs. That’s not new: most marketing work today is done by human specialists, who also need to cooperate. But while cooperation comes naturally to (most) humans, it needs to be purposely added as a skill to an AI.*
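
Here's the guard-and-fallback pattern mentioned above, reduced to a toy. The "model" is a made-up scoring rule and the z-score threshold is arbitrary, standing in for a real out-of-distribution test; the structure is what matters: recognize the unfamiliar, alert a human, and fall back to something safe.

```python
import statistics

# Hypothetical sketch of "recognize the unexpected and fail gracefully."
# TRAINING_SPEND stands in for real training data; the threshold and the
# scoring rule are illustrative, not a real model.

TRAINING_SPEND = [120.0, 95.0, 130.0, 110.0, 105.0, 98.0, 125.0]
MEAN = statistics.mean(TRAINING_SPEND)
STDEV = statistics.stdev(TRAINING_SPEND)

def alert_supervisor(value, z):
    print(f"Unfamiliar input {value!r} (z={z:.1f}); routing to a human.")

def score_offer(customer_spend):
    """Pretend model: only trustworthy near the training distribution."""
    return 0.5 + 0.001 * (customer_spend - MEAN)

def safe_score_offer(customer_spend, max_z=3.0):
    """Refuse to guess on inputs far outside the training data."""
    z = abs(customer_spend - MEAN) / STDEV
    if z > max_z:
        alert_supervisor(customer_spend, z)
        return 0.0  # safe default: make no offer rather than a bad one
    return score_offer(customer_spend)

print(safe_score_offer(115.0))    # familiar input: the model answers
print(safe_score_offer(25000.0))  # unfamiliar input: alert and fall back
```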

By itself, this more nuanced picture isn’t especially problematic. Yes, marketers will need multiple AIs and those AIs will need to cooperate. Maintaining that cooperation will take work, but presumably even that can eventually be managed by yet another specialized AI.

But let’s put that picture in a larger context.

The dominant feature of today’s business environment is accelerating change. AI itself is part of that change but there are other forces at play: notably, the “personal network effect” that drives companies like Facebook, Google, and Amazon to hoard increasing amounts of data about individual consumers. These forces will impose radical change on marketers’ relations with customers. And radical change is exactly what the marketers’ AI systems will be unable to handle.

So now we have a problem. It’s easy – and fun – to envision a complex collection of AI-driven components collaborating to create fully automated, perfectly personalized customer experiences. But that system will be prone to frequent failures as one or another component finds itself facing conditions it wasn’t trained to handle. If the systems are well designed (and we’re lucky), the components will shut themselves down when that happens. If we’re not so lucky, they’ll keep running and return increasingly inappropriate results. Yikes.

Where do we go from here? One conclusion would be that there’s a practical limit to how much of the marketing process can really be taken over by AI. Some people might find that comforting, at least for job security. Others would be sad.

A more positive conclusion is that it’s still possible to build a completely AI-driven marketing process, but it’s going to be harder than we thought. We’ll need to add a few more chores to the project plan:

  • build a coordination framework. We need to teach the different components to talk to each other, preferably in a language that humans can understand. They'll have to share information about what they’re doing and about the results they’re getting, so each component can learn from the experience of the others and can see the impact its choices have elsewhere. (One hypothetical shape for such a shared message is sketched after this list.) It seems likely there will be an AI dedicated specifically to understanding and predicting those impacts throughout the system. Training that AI will be especially challenging. In keeping with the new tradition of naming AIs after famous people, let's call this one John Wanamaker.

  • learn to monitor effectively. Someone has to keep an eye on the AIs to make sure they’re making good choices and generally functioning correctly. Each component needs to be monitored in its own terms, and the coordination framework needs to be monitored as a whole. Yes, an AI could do that, but it would be dangerous to remove humans from the loop entirely. This is one reason it’s important that the coordination language be human-friendly. Fortunately, result monitoring is a concern for all AI systems, so marketers should be able to piggyback on solutions built elsewhere. At the risk of seeming overly paranoid, I'd suggest the monitoring component be kept as separate as possible from the rest of the system. (A toy version of such a watchdog appears after this list.)

  • build swappable components.  Different components will become obsolete or need retraining at different times, depending on when changes happen in the particular bits of marketing that they control. So we need to make it easy to take any given component offline or to substitute a new one; the interface-plus-registry pattern sketched after this list is one way to get there. If we’ve built our coordination framework properly, this should be reasonably doable. Similarly, a proper framework will make it easy to inject new components when necessary: say, to manage a new output channel or take advantage of a new data source. (This is starting to sound more like a backbone than a framework. I guess it's both.) There will be considerable art in deciding what work to assign to a single component and what to split among different components.

  • gather lots of data.  More data is almost always better, but there's a specific reason to do this for AI: when things change you might need data you didn’t need before, and you’ll be able to retrain your system more quickly if you’ve been capturing that data all along. Remember that AI is based on training sets, so building new training sets is a core activity. The faster you can build new training sets, the faster your systems will be back to functioning effectively. This makes it worth investing in data that has no immediate use (see the logging sketch after this list). Of course, it may also turn out that deeper analysis finds new uses for data even when there hasn’t been a fundamental change. So storing lots of data would be useful for AI even in a stable world.

  • be flexible, be agile, expect the unexpected, look out for black swans, etc.  This is the principle underlying all the previous items, but it's worth stating explicitly because there are surely other methods I haven't listed. If there’s a true black swan event – unpredictable, rare, and transformative – you might end up scrapping your system entirely. That, in itself, is a contingency to plan for. But you can also expect lots of smaller changes, and you'll want your system to be robust against them while giving up as little performance as possible during periods of stability.
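
On the coordination framework: here's one hypothetical shape such a shared, human-readable message might take. The component names and fields are invented for illustration; the design point is that every component reports what it did, what it knew, and what happened, in a form both machines and supervisors can read.

```python
import json
import time
import uuid

# Hypothetical message format for the coordination framework: every
# component reports action, context, and result in a readable form.

def make_report(component, action, context, result):
    return {
        "message_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "component": component,  # who acted
        "action": action,        # what it chose to do
        "context": context,      # what it knew at the time
        "result": result,        # what happened
    }

report = make_report(
    component="email_subject_picker",
    action={"variant": "subject_b"},
    context={"segment": "lapsed_buyers"},
    result={"opened": True, "clicked": False},
)

# Human-readable on the wire, so supervisors can audit the conversation
# and a "John Wanamaker" AI can learn cross-component impacts from it.
print(json.dumps(report, indent=2))
```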
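On monitoring: here's a toy version of a separate watchdog. The baseline, window, and tolerance are made up, and a real monitor would track many metrics per component, but the structure is the same: compare recent results to history and call a human when they drift.

```python
import random
from collections import deque

# Hypothetical stand-alone monitor, kept separate from the system it
# watches: flags when recent results drift from a historical baseline.

class ResultMonitor:
    def __init__(self, baseline_rate, window=200, tolerance=0.5):
        self.baseline = baseline_rate
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance  # allowed relative drop before alerting

    def record(self, success):
        self.recent.append(int(success))

    def healthy(self):
        if len(self.recent) < self.recent.maxlen:
            return True  # not enough data to judge yet
        rate = sum(self.recent) / len(self.recent)
        return rate >= self.baseline * (1 - self.tolerance)

random.seed(1)
monitor = ResultMonitor(baseline_rate=0.04)
for _ in range(500):
    clicked = random.random() < 0.01  # simulated post-change performance
    monitor.record(clicked)
    if not monitor.healthy():
        print("Click rate far below baseline; alert a human supervisor.")
        break
```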
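On swappable components: the classic way to get this is a common interface plus a registry, sketched here with invented component names. Anything behind the interface can be taken offline, retrained, or replaced without touching the rest of the system.

```python
# Hypothetical sketch of swappable components: each lives behind a
# common interface in a registry, so any piece can be hot-swapped.

class Component:
    def decide(self, context: dict) -> dict:
        raise NotImplementedError

class SubjectPickerV1(Component):
    def decide(self, context):
        return {"variant": "subject_a"}

class SubjectPickerV2(Component):
    def decide(self, context):
        return {"variant": "subject_b"}  # retrained replacement

registry = {"subject_picker": SubjectPickerV1()}

def decide(task, context):
    return registry[task].decide(context)

print(decide("subject_picker", {"segment": "lapsed_buyers"}))
registry["subject_picker"] = SubjectPickerV2()  # hot-swap after retraining
print(decide("subject_picker", {"segment": "lapsed_buyers"}))
```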
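And on gathering data: the simplest habit is to log whole raw events rather than only the fields today's models consume. A minimal, hypothetical example (field names invented):

```python
import json
import time

# Hypothetical event logger: capture the whole raw event so tomorrow's
# training sets already exist when something changes.

def log_event(event, path="events.jsonl"):
    record = {"captured_at": time.time(), **event}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_event({
    "type": "email_open",
    "customer_id": "c-123",
    "device": "mobile",  # unused by today's models; may matter later
    "local_hour": 22,    # ditto
})
```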

Are there steps you should take right now to get ready for the AI-driven future? You betcha. I’ll be talking about them at the MarTech Conference in Boston in October.  I hope you’ll be there!


____________________________________________________________________________________
*Of course, separate AIs privately cooperating with each other is also the stuff of nightmares. But the story that Facebook shut down a chatbot experiment when the chatbots developed their own language is apparently overblown.**

** On the other hand, the Facebook incident was the second time in the past year that AIs were reported to have created a private language.  And that’s just what I found on the first page of Google search. Who knows what the Google search AI is hiding????
