Tuesday, July 07, 2015
I mentioned in my last post that I’ve started to think in terms of three realities: today (the next two years), tomorrow (two to five years out), and later (after five years). Like the famous New Yorker magazine cover that showed a detailed knowledge of Manhattan and an increasingly vague view of more distant regions, our picture of the immediate future is much more nuanced than our picture of what happens farther out. One result is an implicit assumption that future technology will work much better than today’s technology – not because anyone really thinks future technology will be perfect, but because we can’t yet see where its imperfections will appear.
I’ve been thinking about this because so many of my own predictions are premised on increasingly detailed knowledge about customers and prospects. Both the “madtech” vision of broad access to third-party data and the “robotech” vision of delegating decisions to machines assume that effectively complete data will be available about each customer. But a quick look at today’s data shows that is far from true. Here are some factoids I’ve been gathering to illustrate the point:
- 37% of mobile ad locations are accurate to within 100 meters (Thinknear)
- 30-55% match rates for B2C individual-level onboarding (LiveRamp)
- 16-29% match rates for B2B individual-level data enrichment (Raab Associates client tests)
- 14% match rates and low predictive value for B2B account-level intent data (Infer)
And this doesn’t even begin to address predictive modeling, where even a 10x lift over the average response rate still implies many errors at the individual level.
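The arithmetic behind that last point is worth spelling out. Using hypothetical numbers (the post doesn’t give specific rates), a minimal sketch:

```python
# Illustrative arithmetic with assumed numbers: even a strong predictive
# model is wrong about most individuals it flags as likely responders.
base_rate = 0.005             # assume 0.5% of all prospects convert on average
lift = 10                     # assume the model's top segment converts at 10x average
segment_rate = base_rate * lift   # 5% conversion among the "best" prospects
miss_rate = 1 - segment_rate      # 95% of those top prospects still don't convert

print(f"Top-segment conversion: {segment_rate:.0%}")  # 5%
print(f"Top-segment non-converters: {miss_rate:.0%}")  # 95%
```

In other words, a model that is excellent in aggregate can still be mistaken about 19 of every 20 people it targets.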
Contemplating these results does give me pause. At some point, poor data means that theoretically possible approaches are not practical because of low coverage or insufficient performance. Those constraints won’t magically vanish in the future, even though they’re not visible at this distance.
Being a technology optimist, I assume that data will get better over time. But I can’t cite much evidence to support my optimism. If anything, the number of new data sources is outstripping improvements in existing sources. The core challenge is identity resolution: associating data from different sources with the right individual profile. Cross-device matching is the current focus of this discussion, but it covers just part of the problem.
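To make the identity-resolution problem concrete, here is a deliberately simplified sketch (field names and records are hypothetical, and real systems match on many more keys – postal addresses, phone numbers, device IDs, cookies – usually with fuzzy rather than exact logic):

```python
# Minimal identity-resolution sketch: group records from different sources
# into one profile when they share a normalized email address. Records
# with no matchable key remain as orphan profiles -- the source of the
# low match rates cited above.
from collections import defaultdict

def resolve(records):
    """Group records by normalized email; keyless records stay separate."""
    profiles = defaultdict(list)
    for rec in records:
        # Fall back to a unique object id so unmatched records don't collide.
        key = rec.get("email", "").strip().lower() or id(rec)
        profiles[key].append(rec)
    return list(profiles.values())

records = [
    {"source": "crm", "email": "Jane@Example.com", "name": "Jane Doe"},
    {"source": "web", "email": "jane@example.com", "page": "/pricing"},
    {"source": "ads", "device": "abc123"},  # no email: cannot be matched
]
print(len(resolve(records)))  # prints 2: the email records merge, the ad record stands alone
```

Even this toy version shows why match rates fall short: any record lacking a shared, consistently formatted key simply can’t be joined to a profile.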
It’s a safe bet that perfect data won’t be available in two years or five years or probably ever. But the real question is whether enough good data will be available to support the futures I’ve been forecasting.
I think a realistic view is that some data will be more available than other data, and, as a result, some portions of the visions will happen while others do not. Customer data is likely to be richer than prospect data, since customers will grant permission to link with external data sources (or take actions that make linking easier even without their permission). Sharing among complementary companies – for example, airlines and hotels – will be easier to negotiate than sharing with anyone through public exchanges. Data about objects, such as cars or groceries or homes, should be less sensitive than data about individuals (even though there’s obviously a close relationship between objects and their owners). Data about public behaviors, such as travel and store visits, is less sensitive than data about private matters such as health care. (See this recent Altimeter Group report for more information on consumer attitudes to privacy.)
In short, the future will remain unevenly distributed, as William Gibson observed. Marketers and the technologists who support them need both the ideal vision of how things would work in a world of perfect data (which isn’t the same as a perfect world!) and the realistic understanding of what’s likely to be practical within their planning horizon. They can then aggressively pursue opportunities revealed by the vision without chasing chimeras that will never appear. This pursuit is essential: tomorrow always comes, but the future won’t happen by itself.