Friday, October 05, 2007

Analytica Provides Low-Cost, High-Quality Decision Models

My friends at DM News, which has published my Software Review column for the past fifteen years, unceremoniously informed me this week that they had decided to stop carrying all of their paid columnists, myself included. This caught me in the middle of preparing a review of Lumina Analytica, a pretty interesting piece of simulation modeling software. Lest my research go to waste, I’ll write about Analytica here.

Analytica falls into the general class of software used to build mathematical models of systems or processes and then predict the results of a particular set of inputs. Business people typically use such software to understand the expected results of projects such as a new product launch or a marketing campaign, or to forecast the performance of their business as a whole. Such models can also be used to describe the lifecycle of a customer or to calculate the values of key performance indicators linked on a strategy map, provided the relationships among those indicators have been defined with sufficient rigor.

When the relationships between inputs and outputs are simple, such models can be built in a spreadsheet. But even moderately complex business problems are beyond what a spreadsheet can reasonably handle: they have too many inputs and outputs and the relationships among these are too complicated. Analytica makes it relatively easy to specify these relationships by drawing them on an “influence diagram” that looks like a typical flow chart. Objects within the chart, representing the different inputs and outputs, can then be opened up to specify the precise mathematical relationships among the elements.

Analytica can also build models that run over a number of time periods, using results from previous periods as inputs to later periods. You can do something like this in a spreadsheet, but it takes a great many hard-coded formulas which are easy to get wrong and hard to change. Analytica also offers a wealth of tools for dealing with uncertainties, such as many different types of probability distributions. These are virtually impossible to handle in a normal spreadsheet model.
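
To make this concrete, here is a minimal sketch, in ordinary Python rather than Analytica’s diagram-and-formula notation, of the kind of multi-period model with an uncertain input that Analytica expresses through its influence diagrams. All names and figures are invented:

import random

def run_trial(periods=12, start_revenue=100000.0):
    # One simulated trial: revenue grows by an uncertain rate each period,
    # and each period's result feeds the next period's calculation.
    revenue = start_revenue
    total = 0.0
    for _ in range(periods):
        growth = random.gauss(0.02, 0.01)  # uncertain monthly growth: mean 2%, s.d. 1%
        revenue *= (1 + growth)
        total += revenue
    return total

trials = sorted(run_trial() for _ in range(10000))
print("mean cumulative revenue:", sum(trials) / len(trials))
print("5th / 95th percentiles:", trials[499], trials[9499])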

Spreadsheets aside, Analytica sits between several niches in the software world. Its influence diagrams resemble the pictures drawn by graphics software, but unlike simple drawing programs, Analytica has actual calculations beneath its flow charts. On the other hand, Analytica is less powerful than the process modeling software used to simulate manufacturing systems, call center operations, or other business processes. That software has many sophisticated features tailored to modeling such flows in detail: for example, ways to simulate random variations in the arrival rates of telephone calls or to make one process step wait until several others have been completed. It may be possible to do much of this in Analytica, but it would probably stretch the software beyond its natural limits.

What Analytica does well is model specific decisions or business results over time. The diagram-building approach to creating models is quite powerful and intuitive, particularly because users can build modules within their models, so a single object on a high-level diagram actually refers to a separate, detailed diagram of its own. Object attributes include inputs, outputs and formulas describing how the outputs are calculated. Objects can also contain arrays to handle different conditions: for example, a customer object might use arrays to define treatments for different customer segments. This is a very powerful feature, since it lets an apparently simple model capture a great deal of actual detail.

Setting up a model in Analytica isn’t exactly simple, although it may be about as easy as possible given the inherent complexity of the task. Basically, users place the objects on a palette, connect them with arrows, and then open them up to define the details. There are many options within these details, so it does take some effort to learn how to get what you want. The vendor provides a tutorial and detailed manual to help with the learning process, and offers a variety of training and consulting options. Although it is accessible to just about anyone, the system is definitely oriented toward sophisticated users, providing advanced statistical features and methods that no one else would understand.

The other intriguing feature of Analytica is its price. The basic product costs a delightfully reasonable $1,295. Other versions range up to $4,000 and add the ability to access ODBC data sources, handle very large arrays, and run automated optimization procedures. A server-based version costs $8,000, but only very large companies would need that one.

This pricing is quite impressive. Modeling systems can easily cost tens or hundreds of thousands of dollars, and it’s not clear they provide much more capability than Analytica. On the other hand, Analytica’s output presentation is rather limited—some basic tables and graphs, plus several statistical measures of uncertainty. There’s that statistical orientation again: as a non-statistician, I would have preferred better visualization of results.

In my own work, Analytica could definitely provide a tool for building models to simulate customers’ behaviors as they flow through an Experience Matrix. This is already more than a spreadsheet can handle, and although it could be done in QlikTech it would be a challenge. Similarly, Analytica could be used in business planning and simulation. It wouldn’t be as powerful as a true agent-based model, but could provide an alternative that costs less and is much easier to learn how to build. If you’re in the market for this sort of modeling—particularly if you want to model uncertainties and not just fixed inputs—Analytica is definitely worth a look.

Tuesday, October 02, 2007

Marketing Performance Measurement: No Answers to the Really Tough Questions

I recently ran a pair of two-day workshops on marketing performance measurement. My students had a variety of goals, but the two major ones they mentioned were the toughest issues in marketing: how to allocate resources across different channels and how to measure the impact of marketing on brand value.

Both questions have standard answers. Channel allocation is handled by marketing mix models, which analyze historical data to determine the relative impact of different types of spending. Brand value is measured by assessing the important customer attitudes in a given market and how a particular brand matches those attitudes.

Yet, despite my typically eloquent and detailed explanations, my students found these answers unsatisfactory. Cost was one obstacle for most of them; lack of data was another. They really wanted something simpler.

I’d love to report I gave it to them, but I couldn't. I had researched these topics thoroughly as preparation for the workshops and hadn’t found any alternatives to the standard approaches; further research since then still hasn’t turned up anything else of substance. Channel allocation and brand value are inherently complex and there just are no simple ways to measure them.

The best I could suggest was to use proxy data when a thorough analysis is not possible due to cost or data constraints. For channel allocation, the proxy might be incremental return on investment by channel: switching funds from low ROI to high ROI channels doesn’t really measure the impact of the change in marketing mix, but it should lead to an improvement in the average level of performance. Similarly, surveys to measure changes in customer attitudes toward a brand don’t yield a financial measure of brand value, but do show whether it is improving or getting worse. Some compromise is unavoidable here: companies not willing or able to invest in a rigorous solution must accept that their answers will be imprecise.
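
By way of illustration (with invented figures, and assuming, unrealistically, that each channel’s incremental ROI holds constant as budget moves), here is the arithmetic behind that proxy:

budget = {"search": 200000, "print": 200000}  # invented figures
roi    = {"search": 3.0,    "print": 1.2}     # incremental return per dollar spent

def blended_return(budget, roi):
    return sum(budget[channel] * roi[channel] for channel in budget)

before = blended_return(budget, roi)
budget["print"]  -= 50000   # shift $50,000 out of the low-ROI channel...
budget["search"] += 50000   # ...and into the high-ROI channel
after = blended_return(budget, roi)
print(before, after)        # 840000.0 -> 930000.0: average performance improves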

This round of answers was little better received than the first. Even ROI and customer attitudes are not always available, and they are particularly hard to measure in multi-channel environments where the result of a particular marketing effort cannot easily be isolated. You can try still simpler measures, such as spending or responses for channel performance or market share for brand value. But these are so far removed from the original question that it’s difficult to present them as meaningful answers.

The other approach I suggested was testing. The goal here is to manufacture data where none exists, thereby creating something to measure. This turned out to be a key concept throughout the performance measurement discussions. Testing also shows that marketers are at least doing something rigorous, thereby helping satisfy critics who feel marketing investments are totally arbitrary. Of course, this is a political rather than analytical approach, but politics are important. The final benefit of testing is it gives a platform for continuous improvement: even though you may not know the absolute value of any particular marketing effort, a test tells whether one option or another is relatively superior. Over time, this allows a measurable gain in results compared with the original levels. Eventually it may provide benchmarks to compare different marketing efforts against each other, helping with both channel allocation and brand value as well.

Even testing isn’t always possible, as my students were quick to point out. My answer at that point was simply that you have to seek situations where you can test: for example, Web efforts are often more measurable than conventional channels. Web results may not mirror results in other channels, because Web customers may themselves be very different from the rest of the world. But this again gets back to the issue of doing the best with the resources at hand: some information is better than none, so long as you keep in mind the limits of what you’re working with.

I also suggested that testing is more possible than marketers sometimes think, if they really make testing a priority. This means selecting channels in part on the basis of whether testing is possible; designing programs so testing is built in; and investing more heavily in test activities themselves (such as incentives for survey participants). This approach may ultimately lead to a bias in favor of testable channels—something that seems excessive at first: you wouldn’t want to discard an effective channel simply because you couldn’t test it. But it makes some sense if you realize that testable channels can be improved continuously, while results in untestable channels are likely to stagnate. Given this dynamic, testable channels will sooner or later become more productive than untestable channels. This holds even if the testable channels are less efficient at the start.
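
To illustrate that dynamic with invented numbers: a testable channel that starts out 20% less productive but improves 5% per test cycle pulls ahead of a static channel within a few cycles.

testable, untestable = 0.8, 1.0  # relative productivity at the start (hypothetical)
improvement = 0.05               # gain per test-and-learn cycle in the testable channel
for cycle in range(1, 21):
    testable *= (1 + improvement)
    if testable > untestable:
        print("testable channel pulls ahead after", cycle, "cycles")  # 5 cycles here
        break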

I offered all these considerations to my students, and may have seen a few lightbulbs switch on. It was hard to tell: by the time we had gotten this far into the discussion, everyone was fairly tired. But I think it’s ultimately the best advice I could have given them: focus on testing and measuring what you can, and make the best use possible of the resulting knowledge. It may not directly answer your immediate questions, but you will learn how to make the most effective use of your marketing resources, and that’s the goal you are ultimately pursuing.

Sunday, September 16, 2007

Tableau Software Makes Good Visualization Easy

I took a close look recently at Tableau data visualization software. I liked Tableau a lot, even though it wasn’t quite what I expected. I had thought of it as a way to build aesthetically correct charts, according to the precepts set down by Edward Tufte and like-minded visualization gurus such as Stephen Few. But even though Tableau follows many of these principles, it is less a tool for building charts than one for interactive data exploration.

This is admittedly a pretty subtle distinction, since the exploration is achieved through charts. What I mean is that Tableau is designed to make it very easy to see the results of changing one data element at a time, for example to find whether a particular variable helps to predict an outcome. (That’s a little vague: the example Tableau uses is analyzing the price of a condominium, adding variables like square footage, number of rooms, number of baths, location, etc. to see if they explain differences in the sales price.) What makes Tableau special is that it automatically redraws the graphs as the data changes, often producing a totally different format. The formats are selected according to the aforementioned visualization theories, and for the most part are quite effective.

It may be worth diving a bit deeper into those visualization techniques, although I don’t claim to be an expert. You’ve probably heard some of the gurus’ common criticisms: ‘three dimensional’ bars that don’t mean anything; pie charts and gauges that look pretty but show little information given the space they take up; radar charts that are fundamentally incomprehensible. The underlying premise is that humans are extremely good at finding patterns in visual data, so that is what charts should be used for—not to display specific information, which belongs in tables of numbers. Building on this premise, research shows that people find patterns more easily in certain types of displays: shapes, gradations of color, and spatial relationships work well, but not reading numbers, making subtle size comparisons (e.g., slices in a pie chart), or looking up colors in a key. This approach also implies avoiding components that convey no information, such as the shadows on those ‘3-d’ bar charts, since these can only distract from pattern identification.

In general, these principles work well, although I have trouble with some of the rules that result. For example, grids within charts are largely forbidden, on the theory that charts should only show relative information (patterns) and you don’t need a grid to know whether one bar is higher than another. My problem with that one is that, in fact, it’s often difficult to compare two bars that are not immediately adjacent, and a grid can help. A grid can also provide a useful reference point, such as showing ‘freezing’ on a temperature chart. The gurus might well allow grid lines in some of those circumstances.

On the other hand, the point about color is very well taken. Americans and Europeans often use red for danger and green for good, but there is nothing intuitive about those—they depend on cultural norms. In China, red is a positive color. Worse, the gurus point out, a significant portion of the population is color-blind and can’t distinguish red from green anyway. They suggest that color intensity is a better way to show gradations, since people naturally understand a continuum from light to dark (even though it may not be clear which end of the scale is good or bad). They also suggest muted rather than bright colors, since it’s easier to see subtle patterns when there is less color contrast. In general, they recommend against using color to display meaning (say, to identify regions on a bar chart) because it takes conscious effort to interpret. Where different items must be shown on the same chart, they would argue that differences in shape are more easily understood.

As I say, Tableau is consistent with these principles, although it does let users make other choices if they insist. There is apparently some very neat technology inside Tableau that builds the charts using a specification language rather than conventional configuration parameters. But this is largely hidden from users, since the graphs are usually designed automatically. It may have some effect on how easily the system can switch from one format to another, and on the range of display options.

The technical feature that does impact Tableau users is its approach to data storage. Basically, it doesn’t have one: that is, it relies on external data stores to hold the information it requires, and issues queries against those sources as needed. This was a bit of a disappointment to me, since it means Tableau’s performance really depends on the external systems. Not that that’s so terrible—you could argue (as Tableau does) that this avoids loading data into a proprietary format, making it easier to access the information you need without pre-planning. But it also means that Tableau can be painfully slow when you’re working with large data sets, particularly if they haven’t been optimized for the queries you’re making. In a system designed to encourage unplanned “speed of thought” data exploration, I consider this a significant drawback.

That said, let me repeat that I really liked Tableau. Query speed will be an issue in only some situations. Most of the time, Tableau will draw the required data into memory and work with it there, giving near-immediate response. And if you really need quick response from a very large database, technical staff can always apply the usual optimization techniques. For people with really high-end needs, Tableau already works with the Hyperion multidimensional database and is building an adapter for the Netezza high speed data appliance.

Of course, looking at Tableau led me to compare it with QlikTech. This is definitely apples to oranges: one is a reporting system and the other is a data exploration tool; one has its own database and the other doesn’t. I found that with a little tweaking I could get QlikView to produce many of the same charts as Tableau, although it was certainly more work to get there. I’d love to see the Tableau interface connected with the QlikView data engine, but suspect the peculiarities of both systems make this unlikely. (Tableau queries rely on advanced SQL features; QlikView is not a SQL database.) If I had to choose just one, I would pick the greater data access power and flexibility of QlikTech over the easy visualizations of Tableau. But Tableau is cheap enough—$999 to $1,799 for a single user license, depending on the data sources permitted—that I see no reason most people who need them couldn’t have both.

Thursday, August 30, 2007

Marketing Performance Involves More than Ad Placement

I received a thoughtful e-mail the other day suggesting that my discussion of marketing performance measurement had been limited to advertising effectiveness, thereby ignoring the other important marketing functions of pricing, distribution and product development. For once, I’m not guilty as charged. At a minimum, a balanced scorecard would include measures related to those areas when they were highlighted as strategic. I’d further suggest that many standard marketing measures, such as margin analysis, cross-sell ratios, and retail coverage, address those areas directly.

Perhaps the problem is that so many marketing projects are embedded in advertising campaigns. For example, the way you test pricing strategies is to offer different prices in the marketplace and see how customers react. Same for product testing and cross-sales promotions. Even efforts to improve distribution are likely to boil down to campaigns to sign up new dealers, train existing ones, distribute point-of-sale materials, and so on. The results will nearly always be measured in terms of sales results, exactly as you measure advertising effectiveness.

In fact, since everything is measured by advertising it and recording the results, the real problem may be how to distinguish “advertising” from the other components of the marketing mix. In classic marketing mix statistical models, the advertising component is represented by ad spend, or some proxy such as gross rating points or market coverage. At a more tactical level, the question is the most cost-effective way to reach the target audience, independent of the message content (which includes price, product and perhaps distribution elements, in addition to classic positioning). So it does make sense to measure advertising effectiveness (or, more precisely, advertising placement effectiveness) as a distinct topic.

Of course, marketing does participate in activities that are not embodied directly in advertising or cannot be tested directly in the market. Early-stage product development is driven by market research, for example. Marketing performance measurement systems do need to indicate performance in these sorts of tasks. The challenge here isn’t finding measures—things like percentage of sales from new products and number of research studies completed (lagging and leading indicators, respectively) are easily available. Rather, the difficulty is isolating the contribution of “marketing” from the contribution of other departments that also participate in these projects. I’m not sure this has a solution or even needs one: maybe you just recognize that these are interdisciplinary teams and evaluate them as such. Ultimately we all work for the same company, eh? Now let’s sing Kumbaya.

In any event, I don’t see a problem using standard MPM techniques to measure more than advertising effectiveness. But it’s still worth considering the non-advertising elements explicitly to ensure they are not overlooked.

Monday, August 06, 2007

What Makes QlikTech So Good: A Concrete Example

Continuing with Friday’s thought, it’s worth giving a concrete example of what QlikTech makes easy. Let’s look at the cross-sell report I mentioned on Thursday.

This report answers a common marketing question: which products do customers tend to purchase together, and how do customers who purchase particular combinations of products behave? (Ok, two questions.)

The report begins with a set of transaction records coded with a Customer ID, Product ID, and Revenue. The trick is to identify all pairs among these records that have the same Customer ID. Physically, the resulting report is a matrix with products as both column and row headings. Each cell will report on customers who purchased the pair of products indicated by the row and column headers. Cell contents will be the number of customers, the number of purchases of the product in the column header, and the revenue of those purchases. (We also want row and column totals, but that’s a little complicated so let’s get back to it later.)

Since each record relates to the purchase of a single product, a simple cross tab of the input data won’t provide the information we want. Rather, we need to first identify all customers who purchased a particular product and group them on the same row. Columns will then report on all the other products they purchased.

Conceptually, QlikView and SQL do this in roughly the same way: build a list of existing Customer ID / Product ID combinations, use this list to select customers for each row, and then find all transactions associated with those customers. But the mechanics are quite different.
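
For readers who prefer code, here is that conceptual sequence sketched in Python with pandas. It is neither the QlikView nor the SQL implementation, just the computation both are performing, using the same illustrative field names as the script below.

import pandas as pd

tx = pd.read_csv("input_data.csv")  # columns: Customer_ID, Product_ID, Revenue

# 1. The list of existing Customer ID / Product ID combinations ("Master Product")
pairs = tx[["Customer_ID", "Product_ID"]].drop_duplicates()
pairs = pairs.rename(columns={"Product_ID": "Master_Product"})

# 2. For each Master Product, pull in all transactions of the customers who bought it
joined = pairs.merge(tx, on="Customer_ID")

# 3. One cell per product pair: distinct customers, purchases, and revenue
report = joined.groupby(["Master_Product", "Product_ID"]).agg(
    customers=("Customer_ID", "nunique"),
    purchases=("Revenue", "size"),   # number of transactions in the cell
    revenue=("Revenue", "sum"))
print(report)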

In QlikView, all that’s required is to extract a copy of the original records. This keeps the same field name for Customer ID so it can act as a key relating to the original data, but renames Product ID as Master Product so it can be treated as an independent dimension. The extract is done in a brief script that loads the original data and creates the other table from it:

Columns: // this is the table name
load
Customer_ID,
Product_ID,
Revenue
from input_data.csv (ansi, txt, delimiter is ',', embedded labels); // this code will be generated by a wizard

Rows: // this is the table name
load
Customer_ID,
Product_ID as Master_Product
resident Columns;

After that, all that’s needed is to create a pivot table report in the QlikView interface by specifying the two dimensions and defining expressions for the cell contents: count(distinct Customer_ID), count(Product_ID), and sum(Revenue). QlikView automatically limits the counts to the records qualified for each cell by the dimension definitions.

SQL takes substantially more work. The original extract is similar, creating a table with Customer ID and Master Product. But more technical skill is needed: the user must know to use a “select distinct” command to avoid creating multiple records with the same Customer ID / Product ID combination. Multiple records would result in duplicate rows, and thus double-counting, when the list is later joined back to the original transactions. (QlikView gives the same, non-double-counted results whether or not “select distinct” is used to create its extract.)

Once the extract is created, SQL requires the user to create a table with records for the report. This must contain two records for each transaction: one where the original product is the Master Product, and another where it is the Product ID. This requires a left join (I think) of the extract table against the original transaction table: again, the user needs enough SQL skill to know which kind of join is needed and how to set it up.

Next, the SQL user must create the report values themselves. We’ve now reached the limits of my own SQL skills, but I think you need two selections. The first is a “group by” on the Master Product, Product ID and Customer ID fields for the customer counts. The second is another “group by” on just the Master Product and Product ID for the product counts and revenue. Then you need to join the customer counts back to the more summarized records. Perhaps this could all be done in a single pass, but, either way, it’s pretty tricky.

Finally, the SQL user must display the final results in a report. Presumably this would be done in a report writer that hides the technical details from the user. But somebody skilled will still need to set things up the first time around.

I trust it’s clear how much easier it will be to create this report in QlikView than SQL. QlikView required one table load and one extract. SQL required one table load, one extract, one join to create the report records, and one to three additional selects to create the final summaries. Anybody wanna race?

But this is a very simple example that barely scratches the surface of what users really want. For example, they’ll almost certainly ask to calculate Revenue per Customer. This will be simple for QlikTech: just add a report expression of sum(Revenue) / count(distinct Customer_ID). (Actually, since QlikView lets you name the expressions and then use the names in other expressions, the formula would probably be something simpler still, like “Revenue / CustomerCount”.) SQL will probably need another data pass after the totals are created to do the calculation. Perhaps a good reporting tool will avoid this or at least hide it from the user. But the point is that QlikTech lets you add calculations without any changes to the files, and thus without any advance planning.

Another thing users are likely to want is row and column totals. These are conceptually tricky because you can’t simply add up the cell values. For the row totals, the same customer may appear in multiple columns, so you need to eliminate those duplicates to get correct values for customer count and revenue per customer. For the column totals, you need to remove transactions that appear on two rows (one where they are the Master Product, and another where they are the Product_ID). QlikTech automatically handles both situations because it dynamically calculates the totals from the original data. But SQL created several intermediate tables, so the connection to the original data is lost. Most likely, SQL will need another set of selections and joins to get the correct totals.

QlikTech’s approach becomes even more of an advantage when users start drilling into the data. For example, they’re likely to select transactions related to particular products or on unrelated dimensions such as customer type. Again, since it works directly from the transaction details, QlikView will instantly give correct values (including totals) for these subsets. SQL must rerun at least some of its selections and aggregations.

But there’s more. When we built the cross sell report for our client, we split results based on the number of total purchases made by each customer. We did this without any file manipulation, by adding a “calculated dimension” to the report: aggr(count(Product_ID), Customer_ID). Admittedly, this isn’t something you’d expect a casual user to know, but I personally figured it out just by looking at the help files. It’s certainly simpler than how you’d do it in SQL, which would probably be to count the transactions for each customer, post the resulting value on the transaction records or a customer-level extract file, and rebuild the report.

I could go on, but hope I’ve made the point: the more you want to do, the greater the advantage of doing it in QlikView. Since people in the real world want to do lots of things, the real world advantage of QlikTech is tremendous. Quod Erat Demonstrandum.

(disclaimer: although Client X Client is a QlikTech reseller, contents of this blog are solely the responsibility of the author.)

Friday, August 03, 2007

What Makes QlikTech So Good?

To carry on a bit with yesterday’s topic—QlikTech fascinates me on two levels: first, because it is such a powerful technology, and second because it’s a real-time case study in how a superior technology penetrates an established market. The general topic of diffusion of innovation has always intrigued me, and it would be fun to map QlikView against the usual models (hype curve, chasm crossing, tipping point, etc.) in a future post. Perhaps I shall.

But I think it’s important to first explain exactly what makes QlikView so good. General statements about speed and ease of development are discounted by most IT professionals because they’ve heard them all before. Benchmark tests, while slightly more concrete, are also suspect because they can be designed to favor whoever sponsors them. User case studies may be the most convincing evidence, but they resemble the testimonials for weight-loss programs: they are obviously selected by the vendor and may represent atypical cases. Plus, you don’t know what else was going on that contributed to the results.

QlikTech itself has recognized all this and adopted “seeing is believing” as their strategy: rather than try to convince people how good they are, they show them with Webinars, pre-built demonstrations, detailed tutorials, documentation, and, most important, a fully-functional trial version. What they barely do is discuss the technology itself.

This is an effective strategy with early adopters, who like to get their hands dirty and are seeking a “game changing” improvement in capabilities. But while it creates evangelists, it doesn’t give them anything beyond their own personal experience to testify to the product’s value. So most QlikTech users find themselves making exactly the sort of generic claims about speed and ease of use that are so easily discounted by those unfamiliar with the product. If the individual making the claims has personal credibility, or better still independent decision-making authority, this is good enough to sell the product. But if QlikTech is competing against other solutions that are better known and perhaps more compatible with existing staff skills, a single enthusiastic advocate may not win out—even though they happen to be backed by the truth.

What they need is a story: a convincing explanation of WHY QlikTech is better. Maybe this is only important for certain types of decision-makers—call them skeptics or analytical or rationalists or whatever. But this is a pretty common sort of person in IT departments. Some of them are almost physically uncomfortable with the raving enthusiasm that QlikView can produce.

So let me try to articulate exactly what makes QlikView so good. The underlying technology is what QlikTech calls an “associative” database, meaning data values are directly linked with related values, rather than using the traditional table-and-row organization of a relational database. (Yes, that’s pretty vague—as I say, the company doesn’t explain it in detail. Perhaps their U.S. Patent [number 6,236,986 B1, issued in 2001] would help but I haven’t looked. I don’t think QlikTech uses “associative” in the same way as Simon Williams of LazySoft, which is where Google and Wikipedia point when you query the term.)

Whatever the technical details, the result of QlikTech’s method is that users can select any value of any data element and get a list of all other values on records associated with that element. So, to take a trivial example, selecting a date could give a list of products ordered on that date. You could do that in SQL too, but let’s say the date is on a header record while the product ID is in a detail record. You’d have to set up a join between the two—easy if you know SQL, but otherwise inaccessible. And if you had a longer trail of relations the SQL gets uglier: let’s say the order headers were linked to customer IDs which were linked to customer accounts which were linked to addresses, and you wanted to find products sold in New Jersey. That’s a whole lot of joining going on. Or if you wanted to go the other way: find people in New Jersey who bought a particular product. In QlikTech, you simply select the state or the product ID, and that’s that.
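
QlikTech doesn’t document these internals, so the following is only a toy sketch in Python, with invented tables, of the associative idea: each value is indexed back to the records that contain it, so selecting a value on any field retrieves the related values without the user writing a join.

from collections import defaultdict

customers = [{"customer": "C1", "state": "NJ"}, {"customer": "C2", "state": "CA"}]
orders    = [{"customer": "C1", "product": "Widget"}, {"customer": "C2", "product": "Gadget"}]

# Index each state value back to the customers associated with it
state_index = defaultdict(set)
for c in customers:
    state_index[c["state"]].add(c["customer"])

def products_sold_in(state):
    # Selecting a state yields the associated products; the traversal across
    # tables happens behind the scenes rather than in user-written SQL
    selected = state_index[state]
    return {o["product"] for o in orders if o["customer"] in selected}

print(products_sold_in("NJ"))  # {'Widget'}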

Why is this a big deal? After all, plenty of SQL-based tools can generate that query for non-technical users who don’t know SQL. But those tools have to be set up by somebody, who has to design the database tables, define the joins, and very likely specify which data elements are available and how they’re presented. That somebody is a skilled technician, or probably several technicians (data architects, database administrators, query builders, etc.). QlikTech needs none of that because it’s not generating SQL code to begin with. Instead, users just load the data and the system automatically (and immediately) makes it available. Where multiple tables are involved, the system automatically joins them on fields with matching names. So, okay, somebody does need to know enough to name the fields correctly – but that’s about all the skill required.

The advantages really become apparent when you think about the work needed to set up a serious business intelligence system. The real work in deploying a Cognos or BusinessObjects is defining the dimensions, measures, drill paths, and so on, so the system can generate SQL queries or the prebuilt cubes needed to avoid those queries. Even minor changes like adding a new dimension are a big deal. All that effort simply goes away in QlikTech. Basically, you load the raw data and start building reports, drawing graphs, or doing whatever you need to extract the information you want. This is why development time is cut so dramatically and why developers need so little training.

Of course, QlikView’s tools for building reports and charts are important, and they’re very easy to use as well (basically all point-and-click). But that’s just icing on the cake—they’re not really so different from similar tools that sit on top of SQL or multi-dimensional databases.

The other advantages cited by QlikTech users are speed and scalability. These are simpler to explain: the database sits in memory. The associative approach provides some help here, too, since it reduces storage requirements by removing redundant occurrences of each data value and by storing the data as binary codes. But the main reason QlikView is incredibly fast is that the data is held in memory. The scalability part comes in with 64 bit processors, which can address pretty much any amount of memory. It’s still necessary to stress that QlikView isn’t just putting SQL tables into memory: it’s storing the associative structures, with all their ease of use advantages. This is an important distinction between QlikTech and other in-memory systems.

I’ve skipped over other benefits of QlikView; it really is a very rich and well thought out system. Perhaps I’ll write about them some other time. The key point for now is that people need to understand that QlikView uses a fundamentally different database technology, one that hugely simplifies application development by making the normal database design tasks unnecessary. The fantastic claims for QlikTech only become plausible once you recognize that this difference is what makes them possible.

(disclaimer: although Client X Client is a QlikTech reseller, they have no responsibility for the contents of this blog.)

Thursday, August 02, 2007

Notes from the QlikTech Underground

You may have noticed that I haven’t been posting recently. The reason is almost silly: I got to thinking about the suggestion in The Performance Power Grid that each person should identify a single measure most important to their success, and recognized that the number of blog posts certainly isn’t mine. (That may actually be a misinterpretation of the book’s message, but the damage is done.)

Plus, I’ve been busy with other things—in particular, a pilot QlikTech implementation at a Very Large Company that shall remain nameless. Results have been astonishing—we were able to deliver a cross sell analysis in hours that the client had been working on for years using conventional business intelligence technology. A client analyst, with no training beyond a written tutorial, was then able to extend that analysis with new reports, data views and drill-downs in an afternoon. Of course, it helped that the source data itself was already available, but QlikTech still removes a huge amount of effort from the delivery part of the process.

The IT world hasn’t quite recognized how revolutionary QlikTech is, but it’s starting to see the light: Gartner has begun covering them and there was a recent piece in InformationWeek. I’ll brag a bit and point out that my own coverage began much sooner: see my DM News review of July 2005 (written before we became resellers).

It will be interesting to watch the QlikTech story play out. There’s a theory that the big system integration consultancies won’t adopt QlikTech because it is too efficient: since projects that would have involved hundreds of billable hours can be completed in a day or two, the integrators won’t want to give up all that revenue. But I disagree for several reasons: first, competitors (including internal IT) will start using QlikTech and the big firms will have to do the same to compete. Second, there is such a huge backlog of unmet needs for reporting systems that companies will still buy hundreds of hours of time; they’ll just get a lot more done for their money. Third, QlikTech will drive demand for technically demanding data integration projects to feed it information, and for distribution infrastructures to use the results. These will still be big revenue generators for the integrators. So while the big integrators’ first reaction may be that QlikTech is a threat to their revenue, I’m pretty confident they’ll eventually see it gives them a way to deliver greater value to their clients and thus ultimately maintain or increase business volume.

I might post again tomorrow, but then I’ll be on vacation for two weeks. Enjoy the rest of the summer.

Wednesday, July 11, 2007

More Attacks on Net Promoter Score

It seems to be open season on Fred Reichheld. For many years, his concept of Net Promoter Score as a critical predictor of business success has been questioned by marketers. The Internets are now buzzing with a recent academic study “A Longitudinal Examination of Net Promoter and Firm Revenue Growth” (Timothy L. Keiningham, Bruce Cooil, Tor Wallin Andreassen, & Lerzan Aksoy, Journal of Marketing, July 2007) that duplicated Reichheld’s research but “fails to replicate his assertions regarding the ‘clear superiority’ of Net Promoter compared with other measures in those industries.” See, for example, comments by Adelino de Almeida, Alan Mitchell, and Walter Carl. I didn’t see an immediate rebuttal on Reichheld’s own blog, although the blog does contain responses to other criticisms.

There’s a significant contrast between the Net Promoter approach – focusing on a single outcome measure – and the Balanced Scorecard approach of viewing multiple predictive metrics. I think the Balanced Scorecard approach, particularly if cascaded down so individuals see the strategic measures they can directly affect, makes a lot more sense.

Tuesday, July 10, 2007

The Performance Power Grid Doesn't Impress

Every so often, someone offers to send me a review copy of a new business book. Usually I don’t accept, but given my current interest in performance management techniques, a headline touting “Six Reasons the Performance Power Grid Trumps the Balanced Scorecard” was intriguing. After all, Balanced Scorecard is the dominant approach to performance management today—something that becomes clear when you read other books on the topic and find that most have adopted its framework (with or without acknowledgement). So it seemed worth looking at something that claims to supersede it.

I therefore asked for a copy of The Performance Power Grid by David F. Giannetto and Anthony Zecca (John Wiley & Sons, 2006), and promised to mention it in this blog.

The book has its merits: it’s short and the type is big. On the other hand, there are no pictures and very few illustrations.

As to content: I didn’t exactly disagree with it, but nor did I find it particularly enlightening. The authors’ fundamental point is that organizations should build reporting systems that focus workers at each level on the tasks that are most important for business success. Well, okay. Balanced Scorecard says the same thing—the authors seem to have misinterpreted Balanced Scorecard to be about non-strategic metrics, and then criticize it based on that misinterpretation. The Performance Power Grid does seem to focus a bit more on immediate feedback to lower-level workers than Balanced Scorecard, but a fully-developed Balanced Scorecard system definitely includes “cascading” scorecards that reach all workers.

What I really found frustrating about the book was a lack of concrete information on exactly what goes into its desired system. Somehow you pick your “power drivers” to populate a “performance portal” on your “power grid” (there’s a lot of “power” going on here), and provide analytics so workers can see why things are happening and how they can change them. But exactly what this portal looks like, and which data are presented for analysis, isn’t explained in any detail.

The authors might argue that the specifics are unique to each company. But even so, a few extended examples and some general guidelines would be most helpful. The book does actually abound in examples, but most are either historical analogies (Battle of Gettysburg, Apollo 13) or extremely simplistic (a package delivery company focusing on timely package delivery). Then, just when you think maybe the point is each worker should focus on one or two things, the authors casually mention “10 to 15 metrics for each employee that they themselves can affect and are responsible for.” That’s a lot of metrics. I sure would have liked to see a sample list.

On the other hand, the authors are consultants who say their process has been used with great success. My guess is this has less to do with the particular approach than that any method will work if it leads companies to focus relentlessly on key business drivers. It never hurts to repeat that lesson, although I wouldn’t claim it’s a new one.

Monday, July 09, 2007

APQC Provides 3 LTV Case Studies

One of the common criticisms of lifetime value is that it has no practical applications. You and I know this is false, but some people still need convincing. The APQC (formerly the American Productivity and Quality Center) recently published “Insights into Using Customer Valuation Strategies to Drive Growth and Increase Profits from Aon Risk Services, Sprint Nextel, and a Leading Brokerage Services Firm,” which provides three mini-case histories that may help.

Aon created profitability scorecards for 10,000 insurance customers. The key findings were variations in customer service costs, which had a major impact on profitability. The cost estimates were based on surveys of customer-facing personnel. Results were used for planning, pricing, and to change how clients were serviced, and have yielded substantial financial gains.

Sprint Nextel developed a lifetime value model for 45 million wireless customers, classified by segments and services and using “a combination of historical costs, costing assumptions, cost tracing techniques, and activity-based allocations”. The model is used to assess the financial impact of proposed marketing programs and for strategic planning.

The brokerage firm also built a lifetime value model for customer segments, which were defined by trading behaviors, asset levels, portfolio mix and demographics. Value is determined by the products and services used by each segment, and in particular by the costs associated with different service channels. The LTV model is used to evaluate the three-year impact of marketing decisions such as pricing and advertising.

The paper also identifies critical success factors at each company: senior management support, organizational buy-in and profitability analysis technology at Aon; model buy-in at Sprint Nextel; and the model, profitability analysis and customer data at the brokerage firm.

My own take is that this paper reinforces the point that lifetime value is useful only when looking at individual customers or customer segments: a single lifetime value figure for all customers is of little utility. It also reinforces the need to model the incremental impact of different marketing programs, or of any change in the customer experience. Although the Aon and brokerage models are not described in detail, it appears they take expected customer behaviors as inputs and then calculate the financial impact. This is less demanding than having a model forecast the behavior changes themselves. Since it clearly delivers considerable value on its own, it’s a good first step toward a comprehensive lifetime value-based management approach.
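
As a generic sketch of how such a segment-level model works (this is not the Aon, Sprint Nextel, or brokerage model, and all behaviors and figures are invented): expected customer behaviors and costs go in, and a discounted financial value comes out.

def segment_ltv(annual_margin, service_cost, retention_rate, discount_rate=0.10, years=3):
    # Discounted value of an average customer in a segment over a planning horizon
    value = 0.0
    survival = 1.0
    for year in range(1, years + 1):
        survival *= retention_rate                  # chance the customer is still active
        value += (annual_margin - service_cost) * survival / (1 + discount_rate) ** year
    return value

# e.g., a high-cost-to-serve segment vs. a cheaper, lower-retention one
print(segment_ltv(annual_margin=600, service_cost=300, retention_rate=0.85))
print(segment_ltv(annual_margin=600, service_cost=100, retention_rate=0.80))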

Friday, July 06, 2007

Sources of Benchmark Studies

Somehow I found myself researching benchmarking vendors this morning. Usually I think of the APQC, formerly American Productivity and Quality Center, as the source of such studies. They do seem to be the leader and their Web site provides lots of information on the topic.

But a few other names came up too (I’ve excluded some specialists in particular fields such as customer service or health care):

Kaiser Associates
Reset Group (New Zealand)
Resource Services Inc.
Best Practices LLC
MarketingSherpa
MarketingProfs
Cornerstone (banking)


Some of these simply do Web surveys. I wouldn’t trust those without closely examining the technique because it’s too easy for people to give inaccurate replies. Others do more traditional in-depth studies. The studies may be within a single organization, among firms in a single industry, or across industries.

Thursday, July 05, 2007

Is Marketing ROI Important?

You may have noticed that my discussions of marketing performance measurement have not stressed Return on Marketing Investment as an important metric. Frankly, this surprises even me: ROMI appears every time I jot down a list of such measures, but it never quite fits into the final schemes. To use the categories I proposed yesterday, ROMI isn’t a measure of business value, of strategic alignment, or of marketing efficiency. I guess it comes closest to the efficiency category, but the efficiency measures tend to be simpler and more specific, such as a cost per unit or time per activity. Although ROMI could be considered the ultimate measure of marketing efficiency, it is too abstract to fit easily into this group.

Still, my silence doesn’t mean I haven’t been giving ROMI much thought. (I am, after all, a man of many secrets.) In fact, I spent some time earlier this week revisiting what I assume is the standard work on the topic, James Lenskold’s excellent Marketing ROI. Lenskold takes a rigorous and honest view of the subject, which means he discusses the challenges as well as the advantages. I came away feeling ROMI faces two major issues: the practical one of identifying exactly which results are caused by a particular marketing investment, and the more conceptual one of how to deal with benefits that depend in part on future marketing activities.

The practical issue of linking results to investments has no simple solution: there’s no getting around the fact that life is complex. But any measure of marketing performance faces the same challenge, so I don’t see this as a flaw in ROMI itself. The only thing I would say is that ROMI may give an illusion of precision that persists no matter how many caveats are presented along with the numbers.

How to treat future, contingent benefits is also a problem any methodology must face. Lenskold offers several options, from combining several investments into a single investment for analytical purposes, to reporting the future benefits separately from the immediate ROMI, to treating investments with long-term results (e.g. brand building) as overhead rather than marketing. Since he covers pretty much all the possibilities, one of them must be the right answer (or, more likely, different answers will be right in different situations). My own attitude is this isn’t something to agonize over: all marketing decisions (indeed, all business decisions) require assumptions about the future, so it’s not necessary to isolate future marketing programs as something to treat separately from, say, future product costs. Both will result in part from future business decisions. When I calculate lifetime value, I certainly include the results of future marketing efforts in the value stream. Were I to calculate ROMI, I’d do the same.

So here's what it comes down to. Even though I'm attracted to the idea of ROMI, I find it isn't concrete enough to replace specific marketing efficiency measures like cost per order, but is still too narrow to provide the strategic insight gained from lifetime value. (This applies unless you define ROMI to include the results of future marketing decisions, but then it's really the same as incremental LTV.)

Now you know why ROMI never makes my list of marketing performance measures.

Tuesday, July 03, 2007

Marketing Performance: Plan, Simulate, Measure

Let’s dig a bit deeper into the relationships I mentioned yesterday among systems for marketing performance measurement, marketing planning, and marketing simulation (e.g., marketing mix models, lifetime value models). You can think of marketing performance measures as falling into three broad categories:

- measures that show how marketing investments impact business value, such as profits or stock price

- measures that show how marketing investments align with business strategy

- measures that show how efficiently marketing is doing its job (both in terms of internal operations and of cost per unit – impression, response, revenue, etc.)

We can put aside the middle category, which is really a special case related to Balanced Scorecard concepts. Measures in this category are traditional Balanced Scorecard measures of business results and performance drivers. By design, the Balanced Scorecard focuses on just a few of these measures, so it is not concerned with the details captured in the marketing planning system. (Balanced Scorecard proponents recognize the importance of such plans; they just want to manage them elsewhere.) Also, as I’ve previously commented, Balanced Scorecard systems don’t attempt to precisely correlate performance drivers to results, even though they do use strategy maps to identify general causal relationships between them. So Balanced Scorecard systems also don’t need marketing simulation systems, which do attempt to define those correlations.

This leaves the high-level measures of business value and the low-level measures of efficiency. Clearly the low-level measures rely on detailed plans, since you can only measure efficiency by looking at performance of individual projects and then the project mix. (For example: measuring cost per order makes no sense unless you specify the product, channel, offer and other specifics. Only then can you determine whether results for a particular campaign were too high or too low, by comparing them with similar campaigns.)

But it turns out that even the high-level measures need to work from detailed plans. The problem here is that aggregate measures of marketing activity are too broad to correlate meaningfully with aggregate business results. Different marketing activities affect different customer segments, different business measures (revenue, margins, service costs, satisfaction, attrition), and different time periods (some have immediate effects, others are long-term investments). Past marketing investments also affect current period results. So a simple correlation of this period’s marketing costs vs. this period’s business results makes no sense. Instead, you need to look at the details of specific marketing efforts, past and present, to estimate how they each contribute to current business results. (And you need to be reasonably humble in recognizing that you’ll never really account for results precisely—which is why marketing mix models start with a base level of revenue that would occur even if you did nothing.) The logical place to capture those detailed marketing efforts is the marketing planning system.
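
As for the marketing mix models just mentioned, the sketch below shows a common textbook structure in Python: a base level of revenue plus per-channel contributions with carryover. It is not any particular vendor’s model, and the coefficients that would normally be estimated from history are simply invented.

def adstock(spend_by_period, carryover=0.5):
    # Carry part of each period's advertising effect into later periods
    effect, carried = [], 0.0
    for spend in spend_by_period:
        carried = spend + carryover * carried
        effect.append(carried)
    return effect

def predicted_revenue(base, spend_by_channel, coefficients, carryover):
    # Base revenue (what would happen with no marketing) plus each channel's contribution
    periods = len(next(iter(spend_by_channel.values())))
    revenue = [base] * periods
    for channel, spend in spend_by_channel.items():
        for t, effect in enumerate(adstock(spend, carryover[channel])):
            revenue[t] += coefficients[channel] * effect
    return revenue

spend = {"tv": [100, 0, 0, 0], "email": [10, 10, 10, 10]}
print(predicted_revenue(base=500,
                        spend_by_channel=spend,
                        coefficients={"tv": 2.0, "email": 4.0},
                        carryover={"tv": 0.6, "email": 0.1}))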

The role of simulation systems in high-level performance reporting is to convert these detailed marketing plans into estimates of business impact from each program. The program results can then be aggregated to show the impact of marketing as a whole.

Of course, if the simulation system is really evaluating individual projects, it can also provide measures for the low-level marketing efficiency reports. In fact, having those sorts of measures is the only way the low-level system can get beyond comparing programs against other similar programs, to allow comparisons across different program types. This is absolutely essential if marketers are going to shift resources from low- to high-yield activities and therefore make sure they are optimizing return on the marketing budget as a whole. (Concretely: if I want to compare direct mail to email, then looking at response rate won’t do. But if I add a simulation system that calculates the lifetime value acquired from investments in both, I can decide which one to choose.)
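
Concretely, with invented figures: response rate alone would favor email here, but lifetime value acquired per budget dollar favors direct mail.

channels = {
    "direct_mail": {"budget": 100000, "responses": 1000, "ltv_per_customer": 400},
    "email":       {"budget": 100000, "responses": 2500, "ltv_per_customer": 120},
}
for name, c in channels.items():
    ltv_per_dollar = c["responses"] * c["ltv_per_customer"] / c["budget"]
    print(name, "responses:", c["responses"], "LTV acquired per budget dollar:", ltv_per_dollar)
# direct_mail: 4.0 vs. email: 3.0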

So it turns out that planning and simulation systems are both necessary for both high-level and low-level marketing performance measurement. The obvious corollary is that the planning system must capture the data needed for the simulation system to work. This would include tags to identify the segments, time periods and outcomes that each program is intended to affect. Some of these will be part of the planning system already, but other items will be introduced only to make simulation work.

Monday, July 02, 2007

Marketing Planning and Marketing Measurement: Surprisingly Separate

As part of my continuing research into marketing performance measurement, I’ve been looking at software vendors who provide marketing planning systems. I haven’t found any products that do marketing planning by itself. Instead, the function is part of larger systems. In order of increasing scope, these fall into three groups:

Marketing resource management:
- Aprimo
- MarketingPilot
- Assetlink
- Orbis (Australian; active throughout Asia; just opened a London office)
- MarketingCentral
- Xeed (Dutch; active throughout Europe)

Enterprise marketing:
- Unica
- SAS
- Teradata
- Alterian

Enterprise management:
- SAP
- Oracle/Siebel
- Infor

Few companies would buy an enterprise marketing or enterprise management system solely for its marketing planning module. Even marketing resource management software is primarily bought for other functions (mostly content management and program management). This makes sense in that most marketing planning comes down to aggregating information about the marketing programs that reside in these larger systems.

Such aggregations include comparisons across time periods, of budgets against actuals, and of different products and regions against each other. These are great for running marketing operations but don’t address larger strategic issues such as the impact of marketing on customer attitudes or company value. Illustrating this connection requires analytical input from tools such as marketing mix models or business simulations. This is provided by measurement products like Upper Quadrant, Veridiem (now owned by SAS) and MMA Avista. Presumably we’ll see closer integration between the two sets of products over time.

Friday, June 29, 2007

James Taylor on His New Book

A few months ago, James Taylor of Fair Isaac asked me to look over a proof of Smart (Enough) Systems, a book he has co-written with industry guru Neil Raden of Hired Brains. The topic, of course, is enterprise decision management, which the book explains in great detail. It has now been released (you can order through Amazon or James or Neil), so I asked James for a few comments to share.

What did you hope to accomplish with this book? Fame and fortune. Seriously, what I wanted to do was bring a whole bunch of threads and thoughts together in one place with enough space to develop ideas more fully. I have been writing about this topic a lot for several years and seen lots of great examples. The trouble is that a blog (www.edmblog.com) and articles only give you so much room – you tend to skim each topic. A book really let me and Neil delve deeper into the whys and hows of the topic. Hopefully the book will let people see how unnecessarily stupid their systems are and how a focus on the decisions within those systems can make them more useful.

What are the biggest obstacles to EDM and how can people overcome them?
- One is the belief that they need to develop “smart” systems and that this requires to-be-developed technology from the minds of researchers and science-fiction writers. Nothing could be further from the truth – the technology and approach to make systems be smart enough are well established and proven.

- Another is the failure to focus on decisions as critical aspects of their systems. Historically many decisions were taken manually or were not noticed at all. For instance, a call center manager might be put on the line to approve a fee refund for a good customer when the decision could have been taken by the system the call center representative was using, without the need for a referral. That’s an unnecessarily manual decision. A hidden decision might be something like the options on an IVR system. Most companies make them the same for everyone, yet once you know who is calling you could decide to give them a personalized set of options. Most companies don’t even notice this kind of decision and so take it poorly.

- Many companies have a hard time with “trusting” software and so like to have people make decisions. Yet the evidence is that the judicious use of automation for decisions can free up people to make the kinds of decisions they are really good at and let machines take the rest.

- Companies have become convinced that business intelligence means BI software and so they don’t think about using that data to make predictions of the future or the use of those predictions to improve production systems. This is changing slowly as people realize how little value they are getting out of looking backwards with their data instead of looking forwards.

Can EDM be deployed piecemeal (individual decisions) or does it need some overarching framework to understand each decision's long-term impact?
It can and should be deployed piecemeal. Like any approach it becomes easier once a framework is in place and part of an organization’s standard methodology, but local success with the automation and management of an individual decision is both possible and recommended for getting started.

The more of the basic building blocks of a modern enterprise architecture you have the better. Automated decisions are easier to embed if you are adopting SOA/BPM, easier to monitor if you have BI/Performance Management working and more accurate if your data is integrated and managed. None of these are pre-requisites for initial success though.

The book is very long. What did you leave out? Well, I think it is a perfect length! What we left out were detailed how-tos on the technology and a formal methodology/project plans for individual activities. The book pulls together various themes and technologies and shows how they work together but it does not replace the kind of detail you would get in a book on business rules or analytics nor does it replace the need for analytic and systems development methods be they agile or Unified Process or CRISP-DM.

Tuesday, June 26, 2007

Free Data as in Free Beer

I found myself wandering the aisles at the American Library Association national conference over the weekend. Plenty of publishers, library management systems and book shelf builders, none of which are particularly relevant to this blog (although there was at least one “loyalty” system for library patrons). There was some search technology but nothing particularly noteworthy.

The only exhibitor that did catch my eye was Data-Planet, which aggregates data on many topics (think census, economic time series, stocks, weather, etc.) and makes it accessible over the Web through a convenient point-and-click interface. The demo system was incredibly fast for Web access, although I don’t know whether the show set-up was typical. The underlying database is nothing special (SQL Server), but apparently the tables have been formatted for quick and easy access.

None of this would have really impressed me until I heard the price: $495 per user per year. (Also available: a 30-day free trial and a $49.95 month-to-month subscription.) Let me make clear that we’re talking about LOTS of data: “hundreds of public and private industry sources” as the company brochure puts it. Knowing how much people often pay for much smaller data sets, this strikes me as one of those bargains that are too good to pass up even if you don’t know what you’ll do with it.

As I was pondering this, I recalled a post by Adelino de Almeida about some free data aggregation sites, Swivel and Data360. This made me a bit sad: I was pretty enthused about Data-Planet but don’t see how they can survive when others are giving away similar data for free. I’ve only played briefly with Swivel and Data360 but suspect they aren’t quite as powerful as Data-Planet, so perhaps there is room for both free and paid services.

Incidentally, Adelino has been posting recently about lifetime value. He takes a different approach to the topic than I do.

Wednesday, June 20, 2007

Using Lifetime Value to Measure the Value of Data Quality

As readers of this blog are aware, I’ve reluctantly backed away from arguing that lifetime value should be the central metric for business management. I still think it should, but haven’t found managers ready to agree.

But even if LTV isn’t the primary metric, it can still provide a powerful analytical tool. Consider, for example, data quality. One of the challenges facing a data quality initiative is how to justify the expense. Lifetime value provides a framework for doing just that.

The method is pretty straightforward: break lifetime value into its components and quantify the impact of a proposed change on whichever components will be affected. Roll this up to business value, and there you have it.

Specifically, such a breakdown would look like this:

Business value = sum of future cash flows = number of customers x lifetime value per customer

Number of customers would be further broken down into segments, with the number of customers in each segment. Many companies have a standard segmentation scheme that would apply to all analyses of this sort. Others would create custom segmentations depending on the nature of the project. Where a specific initiative such as data quality is concerned, it would make sense to isolate the customer segments affected by the initiative and just focus on them. (This may seem self-evident, but it’s easy for people to ignore the fact that only some customers will be affected, and apply estimated benefits to everybody. This gives nice big numbers but is often quite unrealistic.)

Lifetime value per customer can be calculated many ways, but a pretty common approach is to break it into three major factors:

- acquisition value, further divided into the marketing cost of acquiring a new customer, the revenue from that initial purchase, and the fulfillment costs (product, service, etc.) related to that purchase. All these values are calculated separately for each customer segment.

- future value, which is the number of active years per customer times the value per year. Years per customer can be derived from a retention rate or a more advanced approach such as a survivor curve (showing the number of customers remaining at the end of each year). Value per year can be broken into the number of orders per year times the value per order, or the average mix of products times the value per product. Value per order or product can itself be broken into revenue, marketing cost and fulfillment cost.

Laid out more formally, this comes to nine key factors:

- number of customers

- acquisition marketing cost per customer
- acquisition revenue per customer
- acquisition fulfillment cost per customer

- number of years per customer
- orders per year
- revenue per order
- marketing cost per order
- fulfillment cost per order
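
To show how the roll-up works, here is a minimal sketch of how the nine factors combine into lifetime value per customer and then into business value. All of the numbers and function names are hypothetical; in practice each input would be calculated separately for the affected customer segments.

```python
# A minimal, hypothetical sketch of the nine-factor lifetime value roll-up
# described above. All input values are invented for illustration.

def lifetime_value_per_customer(
    acq_marketing_cost, acq_revenue, acq_fulfillment_cost,
    years_per_customer, orders_per_year,
    revenue_per_order, marketing_cost_per_order, fulfillment_cost_per_order,
):
    # Acquisition value: initial revenue less acquisition marketing and fulfillment costs.
    acquisition_value = acq_revenue - acq_marketing_cost - acq_fulfillment_cost
    # Future value: years x orders per year x net value per order.
    value_per_order = revenue_per_order - marketing_cost_per_order - fulfillment_cost_per_order
    future_value = years_per_customer * orders_per_year * value_per_order
    return acquisition_value + future_value

def business_value(number_of_customers, ltv_per_customer):
    # Business value = number of customers x lifetime value per customer.
    return number_of_customers * ltv_per_customer

ltv = lifetime_value_per_customer(
    acq_marketing_cost=50, acq_revenue=80, acq_fulfillment_cost=40,
    years_per_customer=3, orders_per_year=4,
    revenue_per_order=60, marketing_cost_per_order=5, fulfillment_cost_per_order=35,
)
print(ltv)                          # -10 + (3 * 4 * 20) = 230 per customer
print(business_value(25_000, ltv))  # 5,750,000 for the affected segment
```

A data quality project would then be valued by re-running the same calculation with whichever factors it is expected to change and comparing the two results.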

This approach may seem a little too customer-centric: after all, many data quality initiatives relate to things like manufacturing and internal business processes (e.g., payroll processing). Well, as my grandmother would have said, feh! (Rhymes with ‘heh’, in case you’re wondering, and signifies disdain.) First of all, you can never be too customer-centric, and shame on you for even thinking otherwise. Second of all, if you need it: every business process ultimately affects a customer, even if all it does is impact overhead costs (which affect prices and profit margins). Such items are embedded in the revenue and fulfillment cost figures above.

I could easily list examples of data quality changes that would affect each of the nine factors, but, like the margin of Fermat’s book, this blog post is too small to contain them. What I will say is that many benefits come from being able to do more precise segmentation, which will impact revenue, marketing costs, and numbers of customers, years, and orders per customer. Other benefits, impacting primarily fulfillment costs (using my broad definition), will involve more efficient back-office processes such as manufacturing, service and administration.

One additional point worth noting is that many of the benefits will be discontinuous. That is, data that’s currently useless because of poor quality or total absence does not become slightly useful because it becomes slightly better or partially available. A major change like targeted offers based on demographics can only be justified if accurate demographic data is available for a large portion of the customer base. The value of the data therefore remains at zero until a sufficient volume is obtained; then it suddenly jumps to something significant. Of course, there are other cases, such as avoidance of rework or duplicate mailings, where each incremental improvement in quality does bring a small but immediate reduction in cost.
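
The two patterns are easy to express side by side. This is a purely hypothetical sketch, with an assumed 60% coverage threshold and invented benefit figures, of a threshold-style benefit next to one that scales smoothly with quality.

```python
# Hypothetical illustration of discontinuous vs. incremental data quality benefits.
# The 60% coverage threshold and all dollar figures are assumptions.

def targeting_benefit(coverage, threshold=0.60, benefit_at_threshold=2_000_000):
    """Demographic targeting is worth nothing until coverage passes the threshold."""
    return benefit_at_threshold if coverage >= threshold else 0.0

def rework_benefit(quality, max_benefit=250_000):
    """Duplicate-mailing and rework savings grow roughly in proportion to quality."""
    return max_benefit * quality

for level in (0.30, 0.59, 0.61, 0.90):
    print(level, targeting_benefit(level), rework_benefit(level))
```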

Once the business value of a particular data quality effort has been calculated, it’s easy to prepare a traditional return on investment calculation. All you need to add is the cost of improvement itself.

Naturally, the real challenge here is estimating the impact of a particular improvement. There’s no shortcut to make this easy: you simply have to work through the specifics of each case. But having a standard set of factors makes it easier to identify the possible benefits and to compare alternative projects. Perhaps more important, the framework makes it easy to show how improvements will affect conventional financial measurements. These will often make sense to managers who are unfamiliar with the details of the data and processes involved. Finally, the framework and related financial measurements provide benchmarks that can later be compared with actual results to show whether the expected benefits were realized. Although such accountability can be somewhat frightening, proof of success will ultimately build credibility. This, in turn, will help future projects gain easier approval.

Tuesday, June 19, 2007

Unica Paper Gives Marketing Measurement Tips

If the wisdom of Plato can’t solve our marketing measurement problems, perhaps we can look to industry veteran Fred Chapman, currently with enterprise marketing software developer Unica. Fred recently gave a Webinar on Marketing Effectively on Your Terms and Your Time which did an excellent job laying out issues and solutions for today’s marketers. Follow-up materials included a white paper Building a Performance Measurement Culture in Marketing laying out ten steps toward improved marketing measurement.

The advice in the paper is reasonable, if fairly conventional: ensure sponsorship, articulate goals, identify important metrics, and so on. The paper also stresses the importance of having an underlying enterprise marketing system like, say, the one sold by Unica.

This is useful, so far as it goes. But it doesn't help with the real challenge of measurement projects, which is choosing metrics that support corporate strategies. So far, I haven’t come across a specific methodology for doing this. Most gurus seem to assume a flash of enlightenment will show each organization its own path. Perhaps organizations and strategies are all too different for any methodology to be more specific.

Relying on such insights suggests we have veered from Western rationalism to Eastern mysticism. I haven't yet seen a book "The Zen of Marketing Performance Measurement", but perhaps that's where we're headed.

Monday, June 18, 2007

Plato's View of Marketing Performance Measurement

I reread Plato’s Protagoras over the weekend for a change of pace. What makes that relevant here is Socrates’ contention that virtue is the ability to measure accurately—in particular, the ability to measure the amount of good or evil produced by an activity. Socrates’ logic is that people always seek the greatest amount of good (which he equates with pleasure), so different choices simply result from different judgments about which action will produce the most good.

I don’t find this argument terribly convincing, for reasons I’ll get to shortly. But it certainly resembles the case I’ve made here about the importance of measuring lifetime value as a way to make good business decisions. So, to a certain degree, I share Socrates' apparent frustration that so many people fail to accept the logic of this position—that they should devote themselves to learning to measure the consequences of their decisions.

Of course, the flaw in both Plato’s and my own vision is that people are not purely rational. I’ll leave the philosophical consequences to others, but the implication for business management is you can’t expect people to make decisions solely on the basis of lifetime value: they have too many other, non-rational factors to take into consideration.

It was none other than Protagoras who said “Man is the measure of all things”—and I think it’s fair to assume he would be unlikely to accept the Platonic ideal of marketing measurement, which makes lifetime value the measure of all things instead.

Friday, June 15, 2007

Accenture Paper Offers Simplified CRM Planning Approach

As I’ve pointed out many times before, consultants love their 2x2 matrices. Our friends at Accenture have once again illustrated the point with a paper “Surveying and Building Your CRM Future,” whose subtitle promises “a New CRM Software Decision-Making Model”.

Yep, the model is a matrix, dividing users into four categories based on data “density” (volume and update frequency) and business process uniqueness (need for customization). Each combination neatly maps to a different class of CRM software. Specifically:

- High density / low uniqueness is suited to enterprise packages like SAP and Oracle, since there’s a lot of highly integrated data but not too much customization required

- Low density / low uniqueness is suited to Software as a Service (SaaS) products like Salesforce.com since data and customization needs are minimal

- High density / high uniqueness is suited to “composite CRM” suites like Siebel (it’s not clear whether Accenture thinks any other products exist in this group)

- Low density / high uniqueness is suited to specialized “niche” vendors like marketing automation, pricing or analytics systems

In general these are reasonable dimensions, reasonable software classifications and a reasonable mapping of software to user needs. (Of course, some vendors might disagree.) Boundaries in the real world are not quite so distinct, but let's assume that Accenture has knowingly oversimplified for presentation purposes.

A couple of things still bother me. One is the notion that there’s something new here—the paper argues the “old” decision-making model was simply based on comparing functions to business requirements, as if this were no longer necessary. Although it’s true that there is something like functional parity in the enterprise and, perhaps, “composite CRM” categories, there are still many significant differences among the SaaS and niche products. More important, business requirements differ greatly among companies, and are far from encapsulated by two simple dimensions.

A cynic would point out that companies like Accenture pick one or two tools in each category and have no interest in considering alternatives that might be better suited for a particular client. But am I a cynic?

My other objection is that even though the paper mentions Service Oriented Architectures (SOA) several times, it doesn’t really come to grips with the implications. It relegates SOA to the high density / high uniqueness quadrant: “Essentially, a composite CRM solution is a solution that enables organizations to move toward SOAs.” Then it argues that enterprise packages themselves are migrating in the composite CRM direction. This is rather confusing but seems to imply the two categories will merge.

I think what’s missing here is an acknowledgement that real companies will always have a mix of systems. No firm runs purely on SAP or Oracle enterprise software. Large firms have multiple CRM implementations. Thus there will always be a need to integrate different solutions, regardless of where a company falls on the density and uniqueness dimensions. SOA offers great promise as a way to accomplish this integration. This means it is as likely to break apart the enterprise packages as to become the glue that holds them together.

In short, this paper presents some potentially helpful insights. But there’s still no shortcut around the real work of requirements analysis, vendor evaluation and business planning.

Thursday, June 14, 2007

Hosted Software Enters the Down Side of the Hype Cycle

“SMB SaaS sales robust, but holdouts remain” reads the headline on a piece from the SearchSMB.com Website. (For the acronym impaired, SMB is “small and medium sized business” and SaaS is “software as a service”, a.k.a. hosted systems.) The article quotes two recent surveys, one by Saugatuck Technology and the other by Gartner. According to the article, Saugatuck found “SMB adoption rose from 9% in 2006 to 27% in 2007” among businesses under $1 billion in revenue, while Gartner reported “Only 7% of SMBs strongly believed that SaaS was suitable for their organizations, and only 17% said they would consider SaaS when its adoption became more widespread.”

These seem to be conflicting findings, although it’s impossible to know for certain without looking at the actual surveys and their audiences. But the very appearance of the piece suggests some of the bloom is off the SaaS rose. This is a normal stage in the hype cycle and frankly I’ve been anticipating it for some time. The more interesting question is why SMBs would be reluctant to adopt SaaS.

The article quotes Gartner Vice President and Research Director James Browning as blaming the fact that “SMBs are control freaks” and therefore less willing to trust their data to an outsider than larger, presumably more sophisticated entities. Maybe—although I’ve seen plenty of control freaks at big companies too. The article also mentions difficulties with customization and integration. Again, I suspect that’s a contributing factor but probably not the main one.

A more convincing insight came from an actual SMB manager, who pointed to quality of service issues and higher costs than in-house systems. I personally suspect the cost issue is the real one: whether or not they’re control freaks, SMBs are definitely penny-pinchers. That’s what happens when it’s your own money. (I say this as someone who’s run my own Very Small Business for many years.) On a more detailed financial level, SMBs have less formal capital appropriation processes than big companies, so their managers have less incentive to avoid the capital expense by purchasing SaaS products through their operating budgets.

One point the article doesn’t mention is that SaaS prices have gone up considerably, at least among the major vendors. This shifts the economics in favor of in-house systems, particularly since many SMBs can use low cost products that larger companies would not accept. This pricing shift makes sense from the vendors’ standpoint: as SaaS is accepted at larger companies with deeper pockets, it makes sense to raise prices to match. Small businesses may need to look beyond the market leaders to find pricing they can afford.

Wednesday, June 13, 2007

Autonomy Ultraseek Argues There's More to Search Than You-Know-Who

In case I didn’t make myself clear yesterday, my conclusion about balanced scorecard software is that the systems themselves are not very interesting, even though the concept itself can be extremely valuable. There’s nothing wrong with that: payroll software also isn’t very interesting, but people care deeply that it works correctly. In the case of balanced scorecards, you just need something to display the data—fancy dashboard-style interfaces are possible but not really the point. Nor is there much mystery about the underlying technology. All the value and all the art lie elsewhere: in picking the right measures and making sure managers pay attention to what the scorecards are telling them.

I only bring this up to explain why I won’t be writing much about balanced scorecard systems. In a word (and with all due respect, and stressing again that the application is important), I find them boring.

Contrast this with text search systems. These, I find fascinating. The technology is delightfully complicated and subtle differences among systems can have big implications for how well they serve particular purposes. Plus, as I mentioned a little while ago, there is some interesting convergence going on between search technology and data integration systems.

One challenge facing search vendors today is the dominance of Google. I hadn’t really given this much thought, but after reading the white paper “Business Search vs. Consumer Search” (registration required) from Autonomy’s Ultraseek product group, it became clear that they see Google as major competition. The paper doesn’t mention Google by name, but everything from the title on down is focused on explaining why there are “fundamental differences between searching for information on the Internet and finding the right document quickly inside your corporate intranets, public websites and partner extranets.”

The paper states Ultraseek’s case well. It mentions five specific differences between “consumer” search on the Web and business search:

- business users have different, known roles which can be used to tune results
- business users can employ category drill-down, metadata, and other alternatives to keyword searches
- business searches must span multiple repositories, not just Web pages
- business repositories are in many different formats and languages
- business searches are constrained by security and different user authorities

Ultraseek overstates its case in a few areas. Consumer search can use more than just keywords, and in fact can employ quite a few of the text analysis methods that Ultraseek mentions as business-specific. Consumer search is also working on moving beyond Web pages to different repositories, formats and languages. But known user roles and security issues are certainly more relevant to business than consumer search engines. And, although Ultraseek doesn’t mention it, Web search engines don't generally support some other features, like letting content owners tweak results to highlight particular items, that may matter in a business context.

But, over all, the point is well taken: there really is a lot more to search than Google. People need to take the time to find the right tool for the job at hand.

Tuesday, June 12, 2007

Looking for Balanced Scorecard Software

I haven’t been able to come up with an authoritative list of major balanced scorecard software vendors. UK-based consultancy 2GC lists more than 100 in a helpful database with little blurbs on each, but they include performance management systems that are not necessarily for balanced scorecards. The Balanced Scorecard Collaborative, home of balanced scorecard co-inventor David P. Norton, lists two dozen products they have certified as meeting true balanced scorecard criteria. Of these, more than half belong to non-specialist companies, including enterprise software vendors (Oracle, Peoplesoft [now Oracle], SAP, Infor, Rocket Software) and broad business intelligence vendors (Business Objects, Cognos, Hyperion [now Oracle], Information Builders, Pilot Software [now SAP], SAS). Most of these firms have purchased specialist products. The remaining vendors (Active Strategy, Bitam, Consist FlexSI, Corporater, CorVu, InPhase, Intalev, PerformanceSoft [now Actuate], Procos, Prodacapo, QPR and Vision Grupos Consultores) are a combination of performance management specialists and regional consultancies.

That certified products are available from all the major enterprise and business intelligence vendors shows the basic functions needed for balanced scorecards are well understood and widely available. I’m sure there are differences among the products but suspect the choice of system will rarely be critical to project success or failure. The core functions are creation of strategy maps and cascading scorecards. I suspect systems vary more widely in their ability to import and transform scorecard data. A number of products also include project management functions such as task lists and milestone reporting. This is probably outside the core requirements for balanced scorecards but does make sense in the larger context of providing tools to help meet business goals.

If your idea of a good time is playing with this sort of system (and whose isn’t?), Strategy Map offers a fully functional personal version for free.

Monday, June 11, 2007

Why Balanced Scorecards Haven't Succeeded at Marketing Measurement

All this thinking about the overwhelming number of business metrics has naturally led me to consider balanced scorecards as a way to organize metrics effectively. I think it’s fair to say that balanced scorecards have had only modest success in the business world: the concept is widely understood, but far from universally employed.

Balanced scorecards make an immense amount of sense. A disciplined scorecard process begins with strategy definition followed by a strategy map, which identifies the measures most important to a business and how they relate to each other and to final results. Once the top-level scorecard is built, subsidiary scorecards report on components that contribute to the top-level measures, providing more focused information and targets for lower-level managers.

That’s all great. But my problem with scorecards, and I suspect the reason they haven’t been used more widely, is they don’t make a quantifiable link between scorecard measures and business results. Yes, something like on-time arrivals may be a critical success factor for an airline, and thus appear on its scorecard. That scorecard will even give a target value to compare with actual performance. But it won’t show the financial impact of missing the target—for example, every 1% shortfall vs. the target on-time arrival rate translates into $10 million in lost future value. Proponents would argue (a) this value is impossible to calculate because there are so many intervening factors and (b) so long as managers are rewarded for meeting targets (or punished for not meeting them), that’s incentive enough. But I believe senior managers are rightfully uncomfortable setting those sorts of targets and reward systems unless the relationships between the targets and financial results are known. Otherwise, they risk disproportionately rewarding the selected behaviors, thereby distorting management priorities and ultimately harming business results.
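
For what it’s worth, the missing link would not be complicated to express once the relationship were estimated; the hard part is estimating it. Here is a tiny hypothetical sketch using the airline example above, with the $10 million of future value per percentage point taken as an assumption.

```python
# Hypothetical sketch of the quantified link described above: translating a
# scorecard shortfall into a financial impact. The $10M-per-point figure is assumed.

VALUE_PER_POINT = 10_000_000  # lost future value per 1% shortfall in on-time arrivals

def lost_future_value(target_on_time_pct, actual_on_time_pct):
    shortfall = max(target_on_time_pct - actual_on_time_pct, 0.0)
    return shortfall * VALUE_PER_POINT

print(lost_future_value(92.0, 89.5))  # a 2.5-point miss -> $25,000,000 of lost future value
```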

Loyal readers of this blog might expect me to propose lifetime value as a better alternative. It probably is, but the lukewarm response it elicits from most managers has left me cautious. Whether managers don’t trust LTV calculations because they’re too speculative, or (more likely) are simply focused on short-term results, it’s pretty clear that LTV will not be the primary measurement tool in most organizations. I haven’t quite given up hope that LTV will ultimately receive its due, but for now feel it makes more sense to work with other measures that managers find more compelling.

Friday, June 08, 2007

So Many Measures, So Little Time

I’ve been collating lists of marketing performance metrics from different sources, which is exactly as much fun as it sounds. One result that struck me was how little overlap I found: on two big lists of just over 100 metrics each, there were only 24 in common. These were fundamental concepts like market share, customer lifetime value, gross rating points, and clickthrough rate. Oddly enough, some metrics that I consider very basic were totally absent, such as number of campaigns and average campaign size. (These are used to measure staff productivity and degree of targeting.) I think the lesson here is that there is an infinite number of possible metrics, and what’s important is finding or inventing the right ones for each situation. A related lesson is that there is no agreed-upon standard set of metrics to start from.

I also found I could divide the metrics into three fundamental groups. Two were pretty much expected: corporate metrics related to financial results, customers and market position (i.e., brand value); and execution metrics related to advertising, retail, salesforce, Internet, dealers, etc. The third group, which took me a while to recognize, was product metrics: development cost, customer needs, number of SKUs, repair cost, revenue per unit, and so on. Most discussions of the topic don’t treat product metrics as a distinct category, but it’s clearly different from the other two. Of course, many product attributes are not controlled by marketing, particularly in the short term. But it’s still important to know about them since they can have a major impact on marketing results.

Incidentally, this brings up another dimension that I’ve found missing in most discussions, which often classify metrics in a sequence of increasing sophistication, such as activity measures, results measures and leading indicators. Such schemes have no place for metrics based on external factors such as competitor behavior, customer needs, or economic conditions--even though such metrics are present in the metrics lists. Such items are by definition beyond the control of the marketers being measured, so in a sense it’s wrong to consider them as marketing performance metrics. But they definitely impact marketing results, so, like product attributes, they are needed as explanatory factors in any analysis.

Thursday, June 07, 2007

Ace Hardware Fits Ads to Customer Context

As you almost certainly didn’t notice, I didn’t make a blog post yesterday. For no logical reason, this makes me feel guilty. So, since I happened to just see an interesting article, I’ll make two today.

A piece in this week’s BrandWeek describes a promotion by Ace Hardware that will allow people who are tracking a hurricane to find a nearby hardware store ("Look Like Rain? Ace Hardware Hopes So", BrandWeek, June 6, 2007).

This is a great example of using customer context in marketing—one of the core tenets of the Customer Experience Matrix. It’s particularly powerful because it uses context twice: first, in identifying customers who are likely to be located in a hurricane-prone area, and second, because it gives them information about their local hardware store. If they added a mobile-enabled feature that included real-time driving directions, I’d have to give them some sort of award.

eWeek: Semantic Web Shows Convergence of Search and Data Integration

This week’s eWeek has an unusually lucid article explaining the Semantic Web. The article presents the Semantic Web as a way to tag information in a structured way and make it searchable via the Web. I think this oversimplifies a bit by leaving out the importance of the relationships among the tags, which are part of the “semantic” framework and are what make the queries able to return non-trivial results. But no matter—the article gives a clear description of the end result (querying the Web like a database), and that’s quite helpful.

From my personal perspective, it was intriguing that the article also quoted Web creator Tim Berners-Lee as stating "The number one role of Semantic Web technologies is data integration across applications." This supports my previous contention that search applications and data integration (specifically, data matching) tools are starting to overlap. Of course, I was coming at it from the opposite direction, specifically suggesting that data matching technologies would help to improve searches of unstructured data. The article is suggesting that a search application (Semantic Web) would help to integrate structured data. But either way, some cross-pollination is happening today and a full convergence could follow.

Tuesday, June 05, 2007

A Small But Useful Thought

I’ve been continuing my research into marketing performance measurement. Nothing earth-shattering to report, but I did come across one idea worth sharing. I saw a couple of examples where a dashboard graph displayed two measures that represent trade-offs: say, inventory level vs. out-of-stock conditions, or call center time per call vs. call center cross-sell revenue.

Showing two compensatory metrics together at least ensures the implicit trade-off is visible. Results must still be related to ultimate business value to check whether the net change is positive or negative (e.g., is the additional cross-sell revenue worth more than the additional call time?). But just showing the net value alone would hide the underlying changes in the business. So I think it’s more useful to show the measures themselves.
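
Here is a hypothetical illustration of that net-value check; every number is invented for the example.

```python
# Hypothetical net-value check for the cross-sell vs. call-time trade-off above.
# Every figure here is invented for illustration.

def net_value_of_change(extra_seconds_per_call, cost_per_agent_second,
                        extra_cross_sell_revenue_per_call, margin_on_cross_sell):
    extra_cost = extra_seconds_per_call * cost_per_agent_second
    extra_margin = extra_cross_sell_revenue_per_call * margin_on_cross_sell
    return extra_margin - extra_cost

# 45 extra seconds at $0.02 per agent-second vs. $3.00 more cross-sell revenue at a 40% margin.
print(net_value_of_change(45, 0.02, 3.00, 0.40))  # 1.20 - 0.90 = +0.30 per call
```

The net figure answers whether the trade-off is worth it, but, as noted above, displaying the two underlying measures is what reveals how the business is actually changing.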

Sunday, June 03, 2007

Data Visualization Is Just One Part of a Dashboard System

Following Friday’s post on dashboard software, I want to emphasize that data visualization techniques are really just one element of those systems, and not necessarily the most important. Dashboard systems must gather data from source systems; transform and consolidate it; place it in structures suited for high-speed display and analysis; identify patterns, correlations and exceptions; and make it accessible to different users within the constraints of user interests, skills and authorizations. Although I haven’t researched the dashboard products in depth, even a cursory glance at their Web sites suggests they vary widely in these areas.

As with any kind of analytical system, most of the work and most of the value in dashboards will be in the data gathering. Poor visualization of good data can be overcome; good visualization of poor data is basically useless. So users should focus their attention on the underlying capabilities and not be distracted by display alone.

Friday, June 01, 2007

Dashboard Software: Finding More than Flash

I’ve been reading a lot about marketing performance metrics recently, which turns out to be a drier topic than I can easily tolerate—and I have a pretty high tolerance for dry. To give myself a bit of a break without moving too far afield, I decided to research marketing dashboard software. At least that let me look at some pretty pictures.

Sadly, the same problem that afflicts discussions of marketing metrics affects most dashboard systems: what they give you is a flood of disconnected information without any way to make sense of it. Most of the dashboard vendors stress their physical display capabilities—how many different types of displays they provide, how much data they can squeeze onto a page, how easily you can build things—and leave the rest to you. What this comes down to is: they let you make bigger, prettier mistakes faster.

Two exceptions did crop up that seem worth mentioning.

- ActiveStrategy builds scorecards that are specifically designed to link top-level business strategy with lower-level activities and results. They refer to this as “cascading” scorecards and that seems a good term to illustrate the relationship. I suppose this isn’t truly unique; I recollect the people at SAS showing me a similar hierarchy of key performance indicators, and there are probably other products with a cascading approach. Part of this may be the difference between dashboards and scorecards. Still, if nothing else, ActiveStrategy is doing a particularly good job of showing how to connect data with results.

- VisualAcuity doesn’t have the same strategic focus, but it does seek more effective alternatives to the normal dashboard display techniques. As their Web site puts it, “The ability to assimilate and make judgments about information quickly and efficiently is key to the definition of a dashboard. Dashboards aren’t intended for detailed analysis, or even great precision, but rather summary information, abbreviated in form and content, enough to highlight exceptions and initiate action.” VisualAcuity dashboards rely on many small displays and time-series graphs to do this.

Incidentally, if you’re just looking for something different, FYIVisual uses graphics rather than text or charts in a way that is probably very efficient at uncovering patterns and exceptions. It definitely doesn’t address the strategy issue and may or may not be more effective than more common display techniques. But at least it’s something new to look at.