Literal-minded creature that I am, yesterday’s discussion of organizing analysis tools around questions led me to consider changing my sample LTV system to open with a list of questions that the system can answer. Selecting a question would take you to the tab with the related information. (Nothing unique here—many systems do this.)
But I soon realized that things like “How much do revenue, marketing costs and product costs contribute to total value?” aren’t really what managers want to know. In fact, they really want just one button that answers the question, “How can I make more money?” Answering that question requires looking beyond past performance, and even beyond the factors that caused past performance, to identify opportunities for improvement.
You could argue that this is where human creativity comes in, and ultimately you’d be correct. But if we limit the discussion to marginal improvements within an existing structure, the process used to uncover opportunities is pretty well defined and can indeed be automated. It involves comparing the results of ongoing efforts, such as different customer acquisition programs, and shifting investments from below-average to above-average performers.
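To make that concrete, here is a minimal sketch of the comparison step in Python. Every program name and figure is invented for illustration; the point is just to compute return per dollar for each program, compare it to the average, and flag the laggards as sources of budget.

```python
# Minimal sketch: compare acquisition programs on return per dollar and
# flag below-average performers as candidates for shifting budget away.
# All names and figures below are invented.

programs = {
    # program: (spend, lifetime value of customers acquired)
    "paid_search": (50_000, 110_000),
    "display_ads": (30_000, 36_000),
    "email":       (10_000, 28_000),
    "direct_mail": (40_000, 44_000),
}

# Return per dollar spent, per program.
returns = {name: value / spend for name, (spend, value) in programs.items()}
average = sum(returns.values()) / len(returns)

for name, r in sorted(returns.items(), key=lambda kv: kv[1]):
    verdict = "shift budget away" if r < average else "candidate for more budget"
    print(f"{name}: {r:.2f} per dollar ({verdict})")
```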
Of course, it’s not trivial to get information on the results. You also have to estimate the incremental (rather than average) return on investments. But standard systems and formulas can do those sorts of things. They can also estimate the size of the opportunity represented by each change, so the system can prioritize the list of recommendations that the One Big Button returns.
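Continuing the sketch, here is one hedged way to estimate incremental return and size an opportunity. It assumes each program’s response follows a diminishing-returns curve of the form value = a * spend^b with b < 1; the coefficients are invented, and a real system would fit them from test or historical data.

```python
# Sketch of sizing and prioritizing a recommendation. Assumes each
# program's response is value = a * spend**b with b < 1 (diminishing
# returns); the coefficients and spend levels are invented.

curves = {
    # program: (a, b) in value = a * spend**b
    "paid_search": (180.0, 0.60),
    "display_ads": (400.0, 0.45),
    "email":       (90.0, 0.70),
}
spend = {"paid_search": 50_000, "display_ads": 30_000, "email": 10_000}

def marginal_return(program: str) -> float:
    """Incremental value of the next dollar, not the average return."""
    a, b = curves[program]
    return a * b * spend[program] ** (b - 1)

shift = 1_000  # dollars to move in one recommendation
mr = {p: marginal_return(p) for p in curves}
worst, best = min(mr, key=mr.get), max(mr, key=mr.get)

# Opportunity size: value gained by funding the best program minus the
# value given up by defunding the worst, for a small budget shift.
opportunity = (mr[best] - mr[worst]) * shift
print(f"Move ${shift:,} from {worst} to {best}: ~${opportunity:,.0f} gained")
```

Repeating that calculation for every feasible shift and sorting by the estimated gain yields the ranked list the One Big Button would return.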
Now, if the only thing managers have to do is push one button, why not automate the process entirely? Indeed, you may well do that in some situations. But here’s where we get back to human judgment.
It’s not just that systems sometimes recommend things that managers know are wrong. An automated forecast that simply extrapolated weekly toy sales, for example, would predict incredible sales each January, because it would project the December holiday spike forward. Any human (at least in the U.S.) knows the pattern is seasonal. However, this isn’t such a big deal: systems can be built to incorporate such factors and can be modified over time to avoid repeating an error.
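For instance, here is a toy illustration (all numbers invented) of the kind of seasonal adjustment a system can incorporate: compute a per-month seasonal index, deseasonalize the latest value, and reseasonalize for the month being forecast.

```python
# Toy illustration of building seasonality into a forecast. Monthly toy
# sales (invented) spike in December; naively extrapolating the latest
# month would wrongly predict a huge January.

sales = [100, 95, 105, 110, 115, 120, 118, 122, 130, 160, 240, 480]  # Jan..Dec
mean = sum(sales) / len(sales)

# Seasonal index: how each month compares to the yearly average.
index = [s / mean for s in sales]

naive_january = sales[-1]  # just repeat December
adjusted_january = (sales[-1] / index[11]) * index[0]  # deseasonalize, reseasonalize

print(f"Naive January forecast:    {naive_january:.0f}")
print(f"Seasonal January forecast: {adjusted_january:.0f}")
```

With a single year of data this just echoes last January back; in practice the indices would be averaged over several years, but the mechanism is the same.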
The real reason you want humans involved is that looking at the recommendations and underlying data will generate new ideas and insights. A machine can only work within the existing structure, but a smart manager or analyst can draw inferences about what else might be worth trying. This will only happen if the manager sees the data.
I’m not saying any of the ideas I’ve just presented are new or profound. But they’re worth keeping in mind. For example, they apply to the question of whether Web targeting should be based on automated behavior monitoring or structured tests. (The correct answer is both—and make sure to look at the results of the automated systems to see if they suggest new tests.)
These ideas may also help developers of business intelligence and analytics systems understand how they can continue to add value, even after specialized features are assimilated into broader platforms. (I’m thinking here of acquisitions: Google/Urchin, Omniture/Touch Clarity, Oracle/Hyperion, SAP/Pilot, and so on.) Many analytical capabilities are rapidly approaching commodity status. In this world, only vendors who help answer the really important question—vendors who put something useful behind the One Big Button—will be able to survive.