I took a close look recently at Tableau data visualization software. I liked Tableau a lot, even though it wasn’t quite what I expected. I had thought of it as a way to build aesthetically correct charts, according to the precepts set down by Edward Tufte and like-minded visualization gurus such as Stephen Few. But even though Tableau follows many of these principles, it is less a tool for building charts than one for interactive data exploration.
This is admittedly a pretty subtle distinction, since the exploration is achieved through charts. What I mean is that Tableau is designed to make it very easy to see the results of changing one data element at a time, for example to find whether a particular variable helps to predict an outcome. (That’s a little vague: the example that Tableau uses is analyzing the price of a condominium, adding variables like square footage, number of rooms, number of baths, location, etc. to see if they explain differences in the sales price.) What makes Tableau special is that it automatically redraws the graphs as the data changes, often producing a totally different format. The formats are selected according to the aforementioned visualization theories, and for the most part they are quite effective.
It may be worth diving a bit deeper into those visualization techniques, although I don’t claim to be an expert. You’ve probably heard some of the gurus’ common criticisms: ‘three dimensional’ bars that don’t mean anything; pie charts and gauges that look pretty but show little information given the space they take up; radar charts that are fundamentally incomprehensible. The underlying premise is that humans are extremely good at finding patterns in visual data, so that is what charts should be used for—not to display specific information, which belongs in tables of numbers. Building on this premise, research shows that people find patterns more easily in certain types of displays: shapes, graduations of color, and spatial relationships work well, but not reading numbers, making subtle size comparisons (e.g., slices in a pie chart), or looking up colors in a key. This approach also implies avoiding components that convey no information, such as the shadows on those ‘3-d’ bar charts, since these can only distract from pattern identification.
In general, these principles work well, although I have trouble with some of the rules that result. For example, grids within charts are largely forbidden, on the theory that charts should only show relative information (patterns) and you don’t need a grid to know whether one bar is higher than another. My problem with that one is that it’s often difficult to compare two bars that are not immediately adjacent, and a grid can help. A grid can also provide a useful reference point, such as showing ‘freezing’ on a temperature chart. The gurus might well allow grid lines in some of those circumstances.
On the other hand, the point about color is very well taken. Americans and Europeans often use red for danger and green for good, but there is nothing intuitive about those—they depend on cultural norms. In China, red is a positive color. Worse, the gurus point out, a significant portion of the population is color-blind and can’t distinguish red from green anyway. They suggest that color intensity is a better way to show gradations, since people naturally understand a continuum from light to dark (even though it may not be clear which end of the scale is good or bad). They also suggest muted rather than bright colors, since it’s easier to see subtle patterns when there is less color contrast. In general, they recommend against using color to display meaning (say, to identify regions on a bar chart) because it takes conscious effort to interpret. Where different items must be shown on the same chart, they would argue that differences in shape are more easily understood.
As I say, Tableau is consistent with these principles, although it does let users make other choices if they insist. There is apparently some very neat technology inside Tableau that builds the charts using a specification language rather than conventional configuration parameters. But this is largely hidden from users, since the graphs are usually designed automatically. It may have some effect on how easily the system can switch from one format to another, and on the range of display options.
The technical feature that does impact Tableau users is its approach to data storage. Basically, it doesn’t have one: that is, it relies on external data stores to hold the information it requires, and issues queries against those sources as needed. This was a bit of a disappointment to me, since it means Tableau’s performance really depends on the external systems. Not that that’s so terrible—you could argue (as Tableau does) that this avoids loading data into a proprietary format, making it easier to access the information you need without pre-planning. But it also means that Tableau can be painfully slow when you’re working with large data sets, particularly if they haven’t been optimized for the queries you’re making. In a system designed to encourage unplanned “speed of thought” data exploration, I consider this a significant drawback.
That said, let me repeat that I really liked Tableau. Query speed will be an issue in only some situations. Most of the time, Tableau will draw the required data into memory and work with it there, giving near-immediate response. And if you really need quick response from a very large database, technical staff can always apply the usual optimization techniques. For people with really high-end needs, Tableau already works with the Hyperion multidimensional database and is building an adapter for the Netezza high speed data appliance.
Of course, looking at Tableau led me to compare it with QlikTech. This is definitely apples to oranges: one is a reporting system and the other is a data exploration tool; one has its own database and the other doesn’t. I found that with a little tweaking I could get QlikView to produce many of the same charts as Tableau, although it was certainly more work to get there. I’d love to see the Tableau interface connected with the QlikView data engine, but suspect the peculiarities of both systems make this unlikely. (Tableau queries rely on advanced SQL features; QlikView is not a SQL database.) If I had to choose just one, I would pick the greater data access power and flexibility of QlikTech over the easy visualizations of Tableau. But Tableau is cheap enough—$999 to $1,799 for a single user license, depending on the data sources permitted—that I see no reason most people who need them couldn’t have both.
Thursday, August 30, 2007
Marketing Performance Involves More than Ad Placement
I received a thoughtful e-mail the other day suggesting that my discussion of marketing performance measurement had been limited to advertising effectiveness, thereby ignoring the other important marketing functions of pricing, distribution and product development. For once, I’m not guilty as charged. At a minimum, a balanced scorecard would include measures related to those areas when they were highlighted as strategic. I’d further suggest that many standard marketing measures, such as margin analysis, cross-sell ratios, and retail coverage, address those areas directly.
Perhaps the problem is that so many marketing projects are embedded in advertising campaigns. For example, the way you test pricing strategies is to offer different prices in the marketplace and see how customers react. Same for product testing and cross-sales promotions. Even efforts to improve distribution are likely to boil down to campaigns to sign up new dealers, train existing ones, distribute point-of-sale materials, and so on. The results will nearly always be measured in terms of sales results, exactly as you measure advertising effectiveness.
In fact, since everything is tested by advertising it and recording the results, the real problem may be how to distinguish “advertising” from the other components of the marketing mix. In classic marketing mix statistical models, the advertising component is represented by ad spend, or some proxy such as gross rating points or market coverage. At a more tactical level, the question is the most cost-effective way to reach the target audience, independent of the message content (which includes price, product and perhaps distribution elements, in addition to classic positioning). So it does make sense to measure advertising effectiveness (or, more precisely, advertising placement effectiveness) as a distinct topic.
Of course, marketing does participate in activities that are not embodied directly in advertising or cannot be tested directly in the market. Early-stage product development is driven by market research, for example. Marketing performance measurement systems do need to indicate performance in these sorts of tasks. The challenge here isn’t finding measures—things like percentage of sales from new products and number of research studies completed (lagging and leading indicators, respectively) are easily available. Rather, the difficulty is isolating the contribution of “marketing” from the contribution of other departments that also participate in these projects. I’m not sure this has a solution or even needs one: maybe you just recognize that these are interdisciplinary teams and evaluate them as such. Ultimately we all work for the same company, eh? Now let’s sing Kumbaya.
In any event, I don’t see a problem using standard MPM techniques to measure more than advertising effectiveness. But it’s still worth considering the non-advertising elements explicitly to ensure they are not overlooked.
Monday, August 06, 2007
What Makes QlikTech So Good: A Concrete Example
Continuing with Friday’s thought, it’s worth giving a concrete example of what QlikTech makes easy. Let’s look at the cross-sell report I mentioned on Thursday.
This report answers a common marketing question: which products do customers tend to purchase together, and how do customers who purchase particular combinations of products behave? (Ok, two questions.)
The report begins with a set of transaction records coded with a Customer ID, Product ID, and Revenue. The trick is to identify all pairs among these records that have the same Customer ID. Physically, the resulting report is a matrix with products as both column and row headings. Each cell will report on customers who purchased the pair of products indicated by the row and column headers. Cell contents will be the number of customers, number of purchases of the product in the column header, and revenue of those purchases. (We also want row and column totals, but that’s a little complicated so let’s get back to that later.)
Since each record relates to the purchase of a single product, a simple cross tab of the input data won’t provide the information we want. Rather, we need to first identify all customers who purchased a particular product and group them on the same row. Columns will then report on all the other products they purchased.
Conceptually, QlikView and SQL do this in roughly the same way: build a list of existing Customer ID / Product ID combinations, use this list to select customers for each row, and then find all transactions associated with those customers. But the mechanics are quite different.
In QlikView, all that’s required is to extract a copy of the original records. This keeps the same field name for Customer ID so it can act as a key relating to the original data, but renames Product ID as Master Product so it can be treated as an independent dimension. The extract is done in a brief script that loads the original data and creates the other table from it:
Columns: // this is the table name
load
Customer_ID,
Product_ID,
Revenue
from input_data.csv (ansi, txt, delimiter is ',', embedded labels); // this code will be generated by a wizard
Rows: // this is the table name
load
Customer_ID,
Product_ID as Master_Product
resident Columns;
After that, all that’s needed is to create a pivot table report in the QlikView interface by specifying the two dimensions and defining expressions for the cell contents: count(distinct Customer_ID), count(Product_ID), and sum(Revenue). QlikView automatically limits the counts to the records qualified for each cell by the dimension definitions.
SQL takes substantially more work. The original extract is similar, creating a table with Customer ID and Master Product. But more technical skill is needed: the user must know to use a “select distinct” command to avoid creating multiple records with the same Customer ID / Product ID combination. Multiple records would result in duplicate rows, and thus double-counting, when the list is later joined back to the original transactions. (QlikView gives the same, non-double-counted results whether or not “select distinct” is used to create its extract.)
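Just to make that concrete, here is a rough sketch of the extract in SQL. The table and column names (transactions, Customer_ID, Product_ID, Revenue) are my own inventions, chosen to match the QlikView script above rather than taken from any real system:
-- One row per Customer_ID / Master_Product pair; "distinct" prevents the
-- duplicates that would otherwise double-count after the later join.
create table master_products as
select distinct
    Customer_ID,
    Product_ID as Master_Product
from transactions;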
Once the extract is created, SQL requires the user to create a table with records for the report. This must pair each transaction with every Master Product its customer purchased, so a single transaction can appear under several row headings. This requires a join (a plain inner join will do, though a left join gives the same result here) of the extract table against the original transaction table: again, the user needs enough SQL skill to know which kind of join is needed and how to set it up.
Next, the SQL user must create the report values themselves. We’ve now reached the limits of my own SQL skills, but I think you need two selections. The first is a “group by” on the Master Product, Product ID and Customer ID fields for the customer counts. The second is another “group by” on just the Master Product and Product ID for the product counts and revenue. Then you need to join the customer counts back to the more summarized records. Perhaps this could all be done in a single pass, but, either way, it’s pretty tricky.
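For what it’s worth, here is a hedged sketch of how those steps might look, continuing with the invented names from the extract above. Using count(distinct ...), the two “group by” passes do in fact collapse into the single pass I suspected was possible:
-- Pair every transaction with each Master Product its customer bought.
create table report_rows as
select m.Master_Product, t.Customer_ID, t.Product_ID, t.Revenue
from master_products m
join transactions t on t.Customer_ID = m.Customer_ID;

-- Summarize each cell of the matrix in one pass.
create table report_cells as
select Master_Product, Product_ID,
    count(distinct Customer_ID) as Customer_Count,
    count(*) as Purchase_Count,
    sum(Revenue) as Cell_Revenue
from report_rows
group by Master_Product, Product_ID;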
Finally, the SQL user must display the final results in a report. Presumably this would be done in a report writer that hides the technical details from the user. But somebody skilled will still need to set things up the first time around.
I trust it’s clear how much easier it will be to create this report in QlikView than SQL. QlikView required one table load and one extract. SQL required one table load, one extract, one join to create the report records, and one to three additional selects to create the final summaries. Anybody wanna race?
But this is a very simple example that barely scratches the surface of what users really want. For example, they’ll almost certainly ask to calculate Revenue per Customer. This will be simple for QlikTech: just add a report expression of sum(Revenue) / count(distinct Customer_ID). (Actually, since QlikView lets you name the expressions and then use the names in other expressions, the formula would probably be something simpler still, like “Revenue / CustomerCount”.) SQL will probably need another data pass after the totals are created to do the calculation. Perhaps a good reporting tool will avoid this or at least hide it from the user. But the point is that QlikTech lets you add calculations without any changes to the files, and thus without any advance planning.
Another thing users are likely to want is row and column totals. These are conceptually tricky because you can’t simply add up the cell values. For the row totals, the same customer may appear in multiple columns, so you need to eliminate those duplicates to get correct values for customer count and revenue per customer. For the column totals, you need to remove transactions that appear on multiple rows (once under each Master Product their customer purchased). QlikTech automatically handles both situations because it is dynamically calculating the totals from the original data. But SQL created several intermediate tables, so the connection to the original data is lost. Most likely, SQL will need another set of selections and joins to get the correct totals.
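Here is a rough idea of what those extra queries might look like, still with my invented names—row totals from the paired rows (so duplicate customers collapse), column totals straight from the original transactions (so each purchase counts only once):
-- Row totals: one line per Master_Product.
select Master_Product,
    count(distinct Customer_ID) as Customer_Count,
    count(*) as Purchase_Count,
    sum(Revenue) as Total_Revenue
from report_rows
group by Master_Product;

-- Column totals: computed against the raw transactions.
select Product_ID,
    count(distinct Customer_ID) as Customer_Count,
    count(*) as Purchase_Count,
    sum(Revenue) as Total_Revenue
from transactions
group by Product_ID;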
QlikTech’s approach becomes even more of an advantage when users start drilling into the data. For example, they’re likely to select transactions related to particular products, or on unrelated dimensions such as customer type. Again, since it works directly from the transaction details, QlikView will instantly give correct values (including totals) for these subsets. SQL must rerun at least some of its selections and aggregations.
But there’s more. When we built the cross sell report for our client, we split results based on the number of total purchases made by each customer. We did this without any file manipulation, by adding a “calculated dimension” to the report: aggr(count(Product_ID), Customer_ID). Admittedly, this isn’t something you’d expect a casual user to know, but I personally figured it out just by looking at the help files. It’s certainly simpler than how you’d do it in SQL, which is probably to count the transactions for each customer, post the resulting value on the transaction records or a customer-level extract file, and rebuild the report.
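In SQL, again only as a sketch with the same invented names, that rebuild might look something like this—count purchases per customer, attach the count to every report row, and then use it as another grouping level:
-- Tag each report row with its customer's total purchase count.
create table report_rows_banded as
select r.*, c.Total_Purchases
from report_rows r
join (
    select Customer_ID, count(*) as Total_Purchases
    from transactions
    group by Customer_ID
) c on c.Customer_ID = r.Customer_ID;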
I could go on, but I hope I’ve made the point: the more you want to do, the greater the advantage of doing it in QlikView. Since people in the real world want to do lots of things, the real-world advantage of QlikTech is tremendous. Quod Erat Demonstrandum.
(disclaimer: although Client X Client is a QlikTech reseller, contents of this blog are solely the responsibility of the author.)
Labels:
business intelligence,
qliktech,
qlikview,
software selection
Friday, August 03, 2007
What Makes QlikTech So Good?
To carry on a bit with yesterday’s topic—QlikTech fascinates me on two levels: first, because it is such a powerful technology, and second because it’s a real-time case study in how a superior technology penetrates an established market. The general topic of diffusion of innovation has always intrigued me, and it would be fun to map QlikView against the usual models (hype curve, chasm crossing, tipping point, etc.) in a future post. Perhaps I shall.
But I think it’s important to first explain exactly what makes QlikView so good. General statements about speed and ease of development are discounted by most IT professionals because they’ve heard them all before. Benchmark tests, while slightly more concrete, are also suspect because they can be designed to favor whoever sponsors them. User case studies may be the most convincing evidence, but they resemble the testimonials for weight-loss programs: they are obviously selected by the vendor and may represent atypical cases. Plus, you don’t know what else was going on that contributed to the results.
QlikTech itself has recognized all this and adopted “seeing is believing” as their strategy: rather than try to convince people how good they are, they show them with Webinars, pre-built demonstrations, detailed tutorials, documentation, and, most important, a fully-functional trial version. What they barely do is discuss the technology itself.
This is an effective strategy with early adopters, who like to get their hands dirty and are seeking a “game changing” improvement in capabilities. But while it creates evangelists, it doesn’t give them anything beyond their own personal experience to testify to the product’s value. So most QlikTech users find themselves making exactly the sort of generic claims about speed and ease of use that are so easily discounted by those unfamiliar with the product. If the individual making the claims has personal credibility, or better still independent decision-making authority, this is good enough to sell the product. But if QlikTech is competing against other solutions that are better known and perhaps more compatible with existing staff skills, a single enthusiastic advocate may not win out—even though they happen to be backed by the truth.
What they need is a story: a convincing explanation of WHY QlikTech is better. Maybe this is only important for certain types of decision-makers—call them skeptics or analytical or rationalists or whatever. But this is a pretty common sort of person in IT departments. Some of them are almost physically uncomfortable with the raving enthusiasm that QlikView can produce.
So let me try to articulate exactly what makes QlikView so good. The underlying technology is what QlikTech calls an “associative” database, meaning data values are directly linked with related values, rather than using the traditional table-and-row organization of a relational database. (Yes, that’s pretty vague—as I say, the company doesn’t explain it in detail. Perhaps their U.S. Patent [number 6,236,986 B1, issued in 2001] would help but I haven’t looked. I don’t think QlikTech uses “associative” in the same way as Simon Williams of LazySoft, which is where Google and Wikipedia point when you query the term.)
Whatever the technical details, the result of QlikTech’s method is that users can select any value of any data element and get a list of all other values on records associated with that element. So, to take a trivial example, selecting a date could give a list of products ordered on that date. You could do that in SQL too, but let’s say the date is on a header record while the product ID is in a detail record. You’d have to set up a join between the two—easy if you know SQL, but otherwise inaccessible. And if you had a longer trail of relations the SQL gets uglier: let’s say the order headers were linked to customer IDs which were linked to customer accounts which were linked to addresses, and you wanted to find products sold in New Jersey. That’s a whole lot of joining going on. Or if you wanted to go the other way: find people in New Jersey who bought a particular product. In QlikTech, you simply select the state or the product ID, and that’s that.
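To show just how much joining, here is an approximate SQL version of that New Jersey query. Every table and column name is hypothetical, invented purely to illustrate the chain of relations:
-- Products sold to customers in New Jersey: four joins for one simple question.
select distinct d.Product_ID
from order_headers h
join order_details d on d.Order_ID = h.Order_ID
join customers c on c.Customer_ID = h.Customer_ID
join customer_accounts a on a.Customer_ID = c.Customer_ID
join addresses ad on ad.Account_ID = a.Account_ID
where ad.State = 'NJ';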
Why is this a big deal? After all, plenty of SQL-based tools can generate that query for non-technical users who don’t know SQL. But those tools have to be set up by somebody, who has to design the database tables, define the joins, and very likely specify which data elements are available and how they’re presented. That somebody is a skilled technician, or probably several technicians (data architects, database administrators, query builders, etc.). QlikTech needs none of that because it’s not generating SQL code to begin with. Instead, users just load the data and the system automatically (and immediately) makes it available. Where multiple tables are involved, the system automatically joins them on fields with matching names. So, okay, somebody does need to know enough to name the fields correctly—but that’s about all the skill required.
The advantages really become apparent when you think about the work needed to set up a serious business intelligence system. The real work in deploying a Cognos or BusinessObjects is defining the dimensions, measures, drill paths, and so on, so the system can generate SQL queries or the prebuilt cubes needed to avoid those queries. Even minor changes like adding a new dimension are a big deal. All that effort simply goes away in QlikTech. Basically, you load the raw data and start building reports, drawing graphs, or doing whatever you need to extract the information you want. This is why development time is cut so dramatically and why developers need so little training.
Of course, QlikView’s tools for building reports and charts are important, and they’re very easy to use as well (basically all point-and-click). But that’s just icing on the cake—they’re not really so different from similar tools that sit on top of SQL or multi-dimensional databases.
The other advantages cited by QlikTech users are speed and scalability. These are simpler to explain: the database sits in memory. The associative approach provides some help here, too, since it reduces storage requirements by removing redundant occurrences of each data value and by storing the data as binary codes. But the main reason QlikView is incredibly fast is that the data is held in memory. The scalability part comes in with 64-bit processors, which can address pretty much any amount of memory. It’s still necessary to stress that QlikView isn’t just putting SQL tables into memory: it’s storing the associative structures, with all their ease-of-use advantages. This is an important distinction between QlikTech and other in-memory systems.
I’ve skipped over other benefits of QlikView; it really is a very rich and well thought out system. Perhaps I’ll write about them some other time. The key point for now is that people need to understand that QlikView uses a fundamentally different database technology, one that hugely simplifies application development by making the normal database design tasks unnecessary. The fantastic claims for QlikTech only become plausible once you recognize that this difference is what makes them possible.
(disclaimer: although Client X Client is a QlikTech reseller, they have no responsibility for the contents of this blog.)
Thursday, August 02, 2007
Notes from the QlikTech Underground
You may have noticed that I haven’t been posting recently. The reason is almost silly: I got to thinking about the suggestion in The Performance Power Grid that each person should identify the single measure most important to their success, and recognized that the number of blog posts certainly isn’t mine. (That may actually be a misinterpretation of the book’s message, but the damage is done.)
Plus, I’ve been busy with other things—in particular, a pilot QlikTech implementation at a Very Large Company that shall remain nameless. Results have been astonishing—we were able to deliver a cross sell analysis in hours that the client had been working on for years using conventional business intelligence technology. A client analyst, with no training beyond a written tutorial, was then able to extend that analysis with new reports, data views and drill-downs in an afternoon. Of course, it helped that the source data itself was already available, but QlikTech still removes a huge amount of effort from the delivery part of the process.
The IT world hasn’t quite recognized how revolutionary QlikTech is, but it’s starting to see the light: Gartner has begun covering them and there was a recent piece in InformationWeek. I’ll brag a bit and point out that my own coverage began much sooner: see my DM News review of July 2005 (written before we became resellers).
It will be interesting to watch the QlikTech story play out. There’s a theory that the big system integration consultancies won’t adopt QlikTech because it is too efficient: since projects that would have involved hundreds of billable hours can be completed in a day or two, the integrators won’t want to give up all that revenue. But I disagree, for a few reasons. First, competitors (including internal IT) will start using QlikTech and the big firms will have to do the same to compete. Second, there is such a huge backlog of unmet needs for reporting systems that companies will still buy hundreds of hours of time; they’ll just get a lot more done for their money. Third, QlikTech will drive demand for technically demanding data integration projects to feed it information, and for distribution infrastructures to use the results. These will still be big revenue generators for the integrators. So while the big integrators’ first reaction may be that QlikTech is a threat to their revenue, I’m pretty confident they’ll eventually see that it gives them a way to deliver greater value to their clients and thus ultimately maintain or increase business volume.
I might post again tomorrow, but then I’ll be on vacation for two weeks. Enjoy the rest of the summer.
Labels:
business intelligence,
qliktech,
qlikview,
software selection
Wednesday, July 11, 2007
More Attacks on Net Promoter Score
It seems to be open season on Fred Reichheld. For many years, his concept of Net Promoter Score as a critical predictor of business success has been questioned by marketers. The Internets are now buzzing with a recent academic study “A Longitudinal Examination of Net Promoter and Firm Revenue Growth” (Timothy L. Keiningham, Bruce Cooil, Tor Wallin Andreassen, & Lerzan Aksoy, Journal of Marketing, July 2007) that duplicated Reichheld’s research but “fails to replicate his assertions regarding the ‘clear superiority’ of Net Promoter compared with other measures in those industries.” See, for example, comments by Adelino de Almeida, Alan Mitchell, and Walter Carl. I didn’t see an immediate rebuttal on Reichheld’s own blog, although the blog does contain responses to other criticisms.
There’s a significant contrast between the Net Promoter approach – focusing on a single outcome measure – and the Balanced Scorecard approach of viewing multiple predictive metrics. I think the Balanced Scorecard approach, particularly if cascaded down so that individuals see the strategic measures they can directly affect, makes a lot more sense.
Labels:
net promoter score
Tuesday, July 10, 2007
The Performance Power Grid Doesn't Impress
Every so often, someone offers to send me a review copy of a new business book. Usually I don’t accept, but given my current interest in performance management techniques, a headline touting “Six Reasons the Performance Power Grid Trumps the Balanced Scorecard” was intriguing. After all, Balanced Scorecard is the dominant approach to performance management today—something that becomes clear when you read other books on the topic and find that most have adopted its framework (with or without acknowledgement). So it seemed worth looking at something that claims to supersede it.
I therefore asked for a copy of The Performance Power Grid by David F. Giannetto and Anthony Zecca (John Wiley & Sons, 2006), and promised to mention it in this blog.
The book has its merits: it’s short and the type is big. On the other hand, there are no pictures and very few illustrations.
As to content: I didn’t exactly disagree with it, but nor did I find it particularly enlightening. The authors’ fundamental point is that organizations should build reporting systems that focus workers at each level on the tasks that are most important for business success. Well, okay. Balanced Scorecard says the same thing—the authors seem to have misinterpreted Balanced Scorecard to be about non-strategic metrics, and then criticize it based on that misinterpretation. The Performance Power Grid does seem to focus a bit more on immediate feedback to lower-level workers than Balanced Scorecard, but a fully-developed Balanced Scorecard system definitely includes “cascading” scorecards that reach all workers.
What I really found frustrating about the book was a lack of concrete information on exactly what goes into its desired system. Somehow you pick your “power drivers” to populate a “performance portal” on your “power grid” (there’s a lot of “power” going on here), and provide analytics so workers can see why things are happening and how they can change them. But exactly what this portal looks like, and which data are presented for analysis, isn’t explained in any detail.
The authors might argue that the specifics are unique to each company. But even so, a few extended examples and some general guidelines would be most helpful. The book does actually abound in examples, but most are either historical analogies (Battle of Gettysburg, Apollo 13) or extremely simplistic (a package delivery company focusing on timely package delivery). Then, just when you think maybe the point is each worker should focus on one or two things, the authors casually mention “10 to 15 metrics for each employee that they themselves can affect and are responsible for.” That’s a lot of metrics. I sure would have liked to see a sample list.
On the other hand, the authors are consultants who say their process has been used with great success. My guess is this has less to do with the particular approach than that any method will work if it leads companies to focus relentlessly on key business drivers. It never hurts to repeat that lesson, although I wouldn’t claim it’s a new one.
Monday, July 09, 2007
APQC Provides 3 LTV Case Studies
One of the common criticisms of lifetime value is that it has no practical applications. You and I know this is false, but some people still need convincing. The APQC (formerly the American Productivity and Quality Center) recently published “Insights into Using Customer Valuation Strategies to Drive Growth and Increase Profits from Aon Risk Services, Sprint Nextel, and a Leading Brokerage Services Firm,” which provides three mini-case histories that may help.
Aon created profitability scorecards for 10,000 insurance customers. The key findings were variations in customer service costs, which had a major impact on profitability. The cost estimates were based on surveys of customer-facing personnel. Results were used for planning, pricing, and to change how clients were serviced, and have yielded substantial financial gains.
Sprint Nextel developed a lifetime value model for 45 million wireless customers, classified by segments and services and using “a combination of historical costs, costing assumptions, cost tracing techniques, and activity-based allocations”. The model is used to assess the financial impact of proposed marketing programs and for strategic planning.
The brokerage firm also built a lifetime value model for customer segments, which were defined by trading behaviors, asset levels, portfolio mix and demographics. Value is determined by the products and services used by each segment, and in particular by the costs associated with different service channels. The LTV model is used to evaluate the three-year impact of marketing decisions such as pricing and advertising.
The paper also identifies critical success factors at each company: senior management support, organizational buy-in and profitability analysis technology at Aon; model buy-in at Sprint Nextel; and the model, profitability analysis and customer data at the brokerage firm.
My own take is that this paper reinforces the point that lifetime value is useful only when looking at individual customers or customer segments: a single lifetime value figure for all customers is of little utility. It also reinforces the need to model the incremental impact of different marketing programs, or of any change in the customer experience. Although the Aon and brokerage models are not described in detail, it appears they take expected customer behaviors as inputs and then calculate the financial impact. This is less demanding than having the model forecast the behavior changes themselves. Since it clearly delivers considerable value on its own, it’s a good first step toward a comprehensive lifetime value-based management approach.
Friday, July 06, 2007
Sources of Benchmark Studies
Somehow I found myself researching benchmarking vendors this morning. Usually I think of the APQC, formerly American Productivity and Quality Center, as the source of such studies. They do seem to be the leader and their Web site provides lots of information on the topic.
But a few other names came up too (I’ve excluded some specialists in particular fields such as customer service or health care):
Kaiser Associates
Reset Group (New Zealand)
Resource Services Inc.
Best Practices LLC
MarketingSherpa
MarketingProfs
Cornerstone (banking)
Some of these simply do Web surveys. I wouldn’t trust those without closely examining the technique because it’s too easy for people to give inaccurate replies. Others do more traditional in-depth studies. The studies may be within a single organization, among firms in a single industry, or across industries.
Thursday, July 05, 2007
Is Marketing ROI Important?
You may have noticed that my discussions of marketing performance measurement have not stressed Return on Marketing Investment as an important metric. Frankly, this surprises even me: ROMI appears every time I jot down a list of such measures, but it never quite fits into the final schemes. To use the categories I proposed yesterday, ROMI isn’t a measure of business value, of strategic alignment, or of marketing efficiency. I guess it comes closest to the efficiency category, but the efficiency measures tend to be simpler and more specific, such as a cost per unit or time per activity. Although ROMI could be considered the ultimate measure of marketing efficiency, it is too abstract to fit easily into this group.
Still, my silence doesn’t mean I haven’t been giving ROMI much thought. (I am, after all, a man of many secrets.) In fact, I spent some time earlier this week revisiting what I assume is the standard work on the topic, James Lenskold’s excellent Marketing ROI. Lenskold takes a rigorous and honest view of the subject, which means he discusses the challenges as well as the advantages. I came away feeling ROMI faces two major issues: the practical one of identifying exactly which results are caused by a particular marketing investment, and the more conceptual one of how to deal with benefits that depend in part on future marketing activities.
The practical issue of linking results to investments has no simple solution: there’s no getting around the fact that life is complex. But any measure of marketing performance faces the same challenge, so I don’t see this as a flaw in ROMI itself. The only thing I would say is that ROMI may give a false illusion of precision that persists no matter how many caveats are presented along with the numbers.
How to treat future, contingent benefits is also a problem any methodology must face. Lenskold offers several options, from combining several investments into a single investment for analytical purposes, to reporting the future benefits separately from the immediate ROMI, to treating investments with long-term results (e.g. brand building) as overhead rather than marketing. Since he covers pretty much all the possibilities, one of them must be the right answer (or, more likely, different answers will be right in different situations). My own attitude is this isn’t something to agonize over: all marketing decisions (indeed, all business decisions) require assumptions about the future, so it’s not necessary to isolate future marketing programs as something to treat separately from, say, future product costs. Both will result in part from future business decisions. When I calculate lifetime value, I certainly include the results of future marketing efforts in the value stream. Were I to calculate ROMI, I’d do the same.
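If I were to sketch that in code, it might look something like the following. To be clear, this is my own illustration rather than anything from Lenskold’s book, and every figure is hypothetical; the point is simply that the cash flows net out the costs of assumed future marketing efforts.

```python
# My own illustration, not Lenskold's formulas: ROMI computed on discounted cash
# flows that net out the costs of assumed future marketing efforts. All figures
# are hypothetical.

def romi(initial_investment, yearly_margin, yearly_future_marketing, discount_rate=0.10):
    npv = sum((margin - future_mktg) / (1 + discount_rate) ** year
              for year, (margin, future_mktg)
              in enumerate(zip(yearly_margin, yearly_future_marketing), start=1))
    return (npv - initial_investment) / initial_investment

# A $100K acquisition campaign with three years of margin, net of assumed retention spending
print(romi(100_000, yearly_margin=[80_000, 70_000, 60_000],
           yearly_future_marketing=[10_000, 10_000, 10_000]))
```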
So here's what it comes down to. Even though I'm attracted to the idea of ROMI, I find it isn't concrete enough to replace specific marketing efficiency measures like cost per order, but is still too narrow to provide the strategic insight gained from lifetime value. (This applies unless you define ROMI to include the results of future marketing decisions, but then it's really the same as incremental LTV.)
Now you know why ROMI never makes my list of marketing performance measures.
Tuesday, July 03, 2007
Marketing Performance: Plan, Simulate, Measure
Let’s dig a bit deeper into the relationships I mentioned yesterday among systems for marketing performance measurement, marketing planning, and marketing simulation (e.g., marketing mix models, lifetime value models). You can think of marketing performance measures as falling into three broad categories:
- measures that show how marketing investments impact business value, such as profits or stock price
- measures that show how marketing investments align with business strategy
- measures that show how efficiently marketing is doing its job (both in terms of internal operations and of cost per unit – impression, response, revenue, etc.)
We can put aside the middle category, which is really a special case related to Balanced Scorecard concepts. Measures in this category are traditional Balanced Scorecard measures of business results and performance drivers. By design, the Balanced Scorecard focuses on just a few of these measures, so it is not concerned with the details captured in the marketing planning system. (Balanced Scorecard proponents recognize the importance of such plans; they just want to manage them elsewhere). Also, as I’ve previously commented, Balanced Scorecard systems don’t attempt to precisely correlate performance drivers to results, even though they do use strategy maps to identify general causal relationships between them. So Balanced Scorecard systems also don’t need marketing simulation systems, which do attempt to define those correlations.
This leaves the high-level measures of business value and the low-level measures of efficiency. Clearly the low-level measures rely on detailed plans, since you can only measure efficiency by looking at performance of individual projects and then the project mix. (For example: measuring cost per order makes no sense unless you specify the product, channel, offer and other specifics. Only then can you determine whether results for a particular campaign were too high or too low, by comparing them with similar campaigns.)
But it turns out that even the high-level measures need to work from detailed plans. The problem here is that aggregate measures of marketing activity are too broad to correlate meaningfully with aggregate business results. Different marketing activities affect different customer segments, different business measures (revenue, margins, service costs, satisfaction, attrition), and different time periods (some have immediate effects, others are long-term investments). Past marketing investments also affect current period results. So a simple correlation of this period’s marketing costs vs. this period’s business results makes no sense. Instead, you need to look at the details of specific marketing efforts, past and present, to estimate how they each contribute to current business results. (And you need to be reasonably humble in recognizing that you’ll never really account for results precisely—which is why marketing mix models start with a base level of revenue that would occur even if you did nothing.) The logical place to capture those detailed marketing efforts is the marketing planning system.
The role of simulation systems in high-level performance reporting is to convert these detailed marketing plans into estimates of business impact from each program. The program results can then be aggregated to show the impact of marketing as a whole.
Of course, if the simulation system is really evaluating individual projects, it can also provide measures for the low-level marketing efficiency reports. In fact, having those sorts of measures is the only way the low-level system can get beyond comparing programs only against other similar programs, to allow comparisons across different program types. This is absolutely essential if marketers are going to shift resources from low- to high-yield activities and therefore make sure they are optimizing return on the marketing budget as a whole. (Concretely: if I want to compare direct mail to email, then looking at response rate won’t do. But if I add a simulation system that calculates the lifetime value acquired from investments in both, I can decide which one to choose.)
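A quick, purely hypothetical illustration of that parenthetical (the numbers are invented): response rate favors the direct mail program, but lifetime value acquired per dollar invested favors the email program.

```python
# Hypothetical numbers, just to illustrate the comparison above: response rate
# alone would favor direct mail, while LTV acquired per dollar favors email.

programs = {
    "direct_mail": {"cost": 50_000, "mailed": 25_000, "responses": 1_000, "ltv_per_customer": 180},
    "email":       {"cost":  5_000, "mailed": 25_000, "responses":   400, "ltv_per_customer": 120},
}

for name, p in programs.items():
    response_rate = p["responses"] / p["mailed"]
    ltv_per_dollar = p["responses"] * p["ltv_per_customer"] / p["cost"]
    print(f"{name}: response rate {response_rate:.1%}, LTV per dollar ${ltv_per_dollar:.2f}")
```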
So it turns out that planning and simulation systems are both necessary for both high-level and low-level marketing performance measurement. The obvious corollary is that the planning system must capture the data needed for the simulation system to work. This would include tags to identify the segments, time periods and outcomes that each program is intended to affect. Some of these will be part of the planning system already, but other items will be introduced only to make simulation work.
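For what it’s worth, here is one possible shape for such a tagged plan record. The field names are mine, not drawn from any particular planning system; a real implementation would obviously carry far more detail.

```python
# One possible shape for a plan record carrying the tags a simulation system
# would need; field names are illustrative, not from any vendor's product.
from dataclasses import dataclass, field

@dataclass
class PlanRecord:
    program_id: str
    budget: float
    segments: list[str] = field(default_factory=list)   # who the program targets
    periods: list[str] = field(default_factory=list)    # when its effects should land
    outcomes: list[str] = field(default_factory=list)   # revenue, attrition, satisfaction, etc.

spring_acquisition = PlanRecord("P-0042", 75_000.0,
                                segments=["prospects_high_value"],
                                periods=["2007Q3", "2007Q4"],
                                outcomes=["revenue", "attrition"])
```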
Monday, July 02, 2007
Marketing Planning and Marketing Measurement: Surprisingly Separate
As part of my continuing research into marketing performance measurement, I’ve been looking at software vendors who provide marketing planning systems. I haven’t found any products that do marketing planning by itself. Instead, the function is part of larger systems. In order of increasing scope, these fall into three groups:
Marketing resource management:
- Aprimo
- MarketingPilot
- Assetlink
- Orbis (Australian; active throughout Asia; just opened a London office)
- MarketingCentral
- Xeed (Dutch; active throughout Europe)
Enterprise marketing:
- Unica
- SAS
- Teradata
- Alterian
Enterprise management:
- SAP
- Oracle/Siebel
- Infor
Few companies would buy an enterprise marketing or enterprise management system solely for its marketing planning module. Even marketing resource management software is primarily bought for other functions (mostly content management and program management). This makes sense in that most marketing planning comes down to aggregating information about the marketing programs that reside in these larger systems.
Such aggregations include comparisons across time periods, of budgets against actuals, and of different products and regions against each other. These are great for running marketing operations but don’t address larger strategic issues such as the impact of marketing on customer attitudes or company value. Illustrating this connection requires analytical input from tools such as marketing mix models or business simulations. This is provided by measurement products like Upper Quadrant, Veridiem (now owned by SAS) and MMA Avista. Presumably we’ll see closer integration between the two sets of products over time.
Friday, June 29, 2007
James Taylor on His New Book
A few months ago, James Taylor of Fair Isaac asked me to look over a proof of Smart (Enough) Systems, a book he has co-written with industry guru Neil Raden of Hired Brains. The topic, of course, is enterprise decision management, which the book explains in great detail. It has now been released (you can order through Amazon or James or Neil), so I asked James for a few comments to share.
What did you hope to accomplish with this book?
Fame and fortune. Seriously, what I wanted to do was bring a whole bunch of threads and thoughts together in one place with enough space to develop ideas more fully. I have been writing about this topic a lot for several years and seen lots of great examples. The trouble is that a blog (www.edmblog.com) and articles only give you so much room – you tend to skim each topic. A book really let me and Neil delve deeper into the whys and hows of the topic. Hopefully the book will let people see how unnecessarily stupid their systems are and how a focus on the decisions within those systems can make them more useful.
What are the biggest obstacles to EDM and how can people overcome them?
- One is the belief that they need to develop “smart” systems and that this requires to-be-developed technology from the minds of researchers and science-fiction writers. Nothing could be further from the truth – the technology and approach to make systems be smart enough are well established and proven.
- Another is the failure to focus on decisions as critical aspects of their systems. Historically many decisions were taken manually or were not noticed at all. For instance, a call center manager might be put on the line to approve a fee refund for a good customer when the decision could have been taken by the system the call center representative was using without the need for a referral. That’s an unnecessarily manual decision. A hidden decision might be something like the options on an IVR system. Most companies make them the same for everyone yet once you know who is calling you could decide to give them a personalized set of options. Most companies don’t even notice this kind of decision and so take it poorly.
- Many companies have a hard time with “trusting” software and so like to have people make decisions. Yet the evidence is that the judicious use of automation for decisions can free up people to make the kinds of decisions they are really good at and let machines take the rest.
- Companies have become convinced that business intelligence means BI software and so they don’t think about using that data to make predictions of the future or the use of those predictions to improve production systems. This is changing slowly as people realize how little value they are getting out of looking backwards with their data instead of looking forwards.
Can EDM be deployed piecemeal (individual decisions) or does it need some overarching framework to understand each decision's long-term impact?
It can and should be deployed piecemeal. Like any approach it becomes easier once a framework is in place and part of an organization’s standard methodology, but local success with the automation and management of an individual decision is both possible and recommended for getting started.
The more of the basic building blocks of a modern enterprise architecture you have the better. Automated decisions are easier to embed if you are adopting SOA/BPM, easier to monitor if you have BI/Performance Management working and more accurate if your data is integrated and managed. None of these are pre-requisites for initial success though.
The book is very long. What did you leave out?
Well, I think it is a perfect length! What we left out were detailed how-tos on the technology and a formal methodology/project plans for individual activities. The book pulls together various themes and technologies and shows how they work together but it does not replace the kind of detail you would get in a book on business rules or analytics nor does it replace the need for analytic and systems development methods be they agile or Unified Process or CRISP-DM.
Tuesday, June 26, 2007
Free Data as in Free Beer
I found myself wandering the aisles at the American Library Association national conference over the weekend. Plenty of publishers, library management systems and book shelf builders, none of which are particularly relevant to this blog (although there was at least one “loyalty” system for library patrons). There was some search technology but nothing particularly noteworthy.
The only exhibitor that did catch my eye was Data-Planet, which aggregates data on many topics (think census, economic time series, stocks, weather, etc.) and makes it accessible over the Web through a convenient point-and-click interface. The demo system was incredibly fast for Web access, although I don’t know whether the show set-up was typical. The underlying database is nothing special (SQL Server), but apparently the tables have been formatted for quick and easy access.
None of this would have really impressed me until I heard the price: $495 per user per year. (Also available: a 30-day free trial and a $49.95 month-to-month subscription.) Let me make clear that we’re talking about LOTS of data: “hundreds of public and private industry sources” as the company brochure puts it. Knowing how much people often pay for much smaller data sets, this strikes me as one of those bargains that are too good to pass up even if you don’t know what you’ll do with it.
As I was pondering this, I recalled a post by Adelino de Almeida about some free data aggregation sites, Swivel and Data360. This made me a bit sad: I was pretty enthused about Data-Planet but don’t see how they can survive when others are giving away similar data for free. I’ve only played briefly with Swivel and Data360 but suspect they aren’t quite as powerful as Data-Planet, so perhaps there is room for both free and paid services.
Incidentally, Adelino has been posting recently about lifetime value. He takes a different approach to the topic than I do.
Wednesday, June 20, 2007
Using Lifetime Value to Measure the Value of Data Quality
As readers of this blog are aware, I’ve reluctantly backed away from arguing that lifetime value should be the central metric for business management. I still think it should, but haven’t found managers ready to agree.
But even if LTV isn’t the primary metric, it can still provide a powerful analytical tool. Consider, for example, data quality. One of the challenges facing a data quality initiative is how to justify the expense. Lifetime value provides a framework for doing just that.
The method is pretty straightforward: break lifetime value into its components and quantify the impact of a proposed change on whichever components will be affected. Roll this up to business value, and there you have it.
Specifically, such a breakdown would look like this:
Business value = sum of future cash flows = number of customers x lifetime value per customer
Number of customers would be further broken down into segments, with the number of customers in each segment. Many companies have a standard segmentation scheme that would apply to all analyses of this sort. Others would create custom segmentations depending on the nature of the project. Where a specific initiative such as data quality is concerned, it would make sense to isolate the customer segments affected by the initiative and just focus on them. (This may seem self-evident, but it’s easy for people to ignore the fact that only some customers will be affected, and apply estimated benefits to everybody. This gives nice big numbers but is often quite unrealistic.)
Lifetime value per customer can be calculated many ways, but a pretty common approach is to break it into three major factors:
- acquisition value, further divided into the marketing cost of acquiring a new customer, the revenue from that initial purchase, and the fulfillment costs (product, service, etc.) related to that purchase. All these values are calculated separately for each customer segment.
- future value, which is the number of active years per customer times the value per year. Years per customer can be derived from a retention rate or a more advanced approach such as a survivor curve (showing the number of customers remaining at the end of each year). Value per year can be broken into the number of orders per year times the value per order, or the average mix of products times the value per product. Value per order or product can itself be broken into revenue, marketing cost and fulfillment cost.
Laid out more formally, this comes to nine key factors (a rough calculation sketch follows the list):
- number of customers
- acquisition marketing cost per customer
- acquisition revenue per customer
- acquisition fulfillment cost per customer
- number of years per customer
- orders per year
- revenue per order
- marketing cost per order
- fulfillment cost per order
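As a rough sketch of how the roll-up might work, assume a simple undiscounted model for a single segment. All of the figures below are invented, and the “improved” scenario just imagines that better data trims fulfillment cost per order and extends customer life slightly.

```python
# A rough sketch, assuming a simple undiscounted model: roll the nine factors up
# to business value for one segment, then compare a baseline against a scenario
# where better data cuts fulfillment cost and extends customer life. Figures
# are illustrative only, not a prescribed formula.

def segment_business_value(n_customers, acq_marketing, acq_revenue, acq_fulfillment,
                           years, orders_per_year, revenue_per_order,
                           marketing_per_order, fulfillment_per_order):
    acquisition_value = acq_revenue - acq_marketing - acq_fulfillment
    value_per_order = revenue_per_order - marketing_per_order - fulfillment_per_order
    future_value = years * orders_per_year * value_per_order
    return n_customers * (acquisition_value + future_value)

baseline = segment_business_value(100_000, 30, 80, 40, 3.0, 2, 60, 5, 35)
improved = segment_business_value(100_000, 30, 80, 40, 3.2, 2, 60, 5, 33)
print(improved - baseline)   # estimated value of the proposed data quality change
```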
This approach may seem a little too customer-centric: after all, many data quality initiatives relate to things like manufacturing and internal business processes (e.g., payroll processing). Well, as my grandmother would have said, feh! (Rhymes with ‘heh’, in case you’re wondering, and signifies disdain.) First of all, you can never be too customer-centric, and shame on you for even thinking otherwise. Second of all, if you need it: every business process ultimately affects a customer, even if all it does is impact overhead costs (which affect prices and profit margins). Such items are embedded in the revenue and fulfillment cost figures above.
I could easily list examples of data quality changes that would affect each of the nine factors, but, like the margin of Fermat’s book, this blog post is too small to contain them. What I will say is that many benefits come from being able to do more precise segmentation, which will impact revenue, marketing costs, and numbers of customers, years, and orders per customer. Other benefits, impacting primarily fulfillment costs (using my broad definition), will involve more efficient back-office processes such as manufacturing, service and administration.
One additional point worth noting is that many of the benefits will be discontinuous. That is, data that's currently useless because of poor quality or total absence does not become slightly useful because it becomes slightly better or partially available. A major change like targeted offers based on demographics can only be justified if accurate demographic data is available for a large portion of the customer base. The value of the data therefore remains at zero until a sufficient volume is obtained: then, it suddenly jumps to something significant. Of course, there are other cases, such as avoidance of rework or duplicate mailings, where each incremental improvement in quality does bring a small but immediate reduction in cost.
Once the business value of a particular data quality effort has been calculated, it’s easy to prepare a traditional return on investment calculation. All you need to add is the cost of improvement itself.
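Continuing the invented figures from the sketch above, the arithmetic really is as simple as it sounds:

```python
# Continuing the hypothetical figures from the earlier sketch: a conventional ROI
# once the cost of the data quality improvement itself is known.
benefit = 2_080_000           # estimated change in business value from the LTV framework
improvement_cost = 400_000    # assumed cost of the data quality initiative
roi = (benefit - improvement_cost) / improvement_cost
print(f"ROI: {roi:.0%}")      # 420%
```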
Naturally, the real challenge here is estimating the impact of a particular improvement. There’s no shortcut to make this easy: you simply have to work through the specifics of each case. But having a standard set of factors makes it easier to identify the possible benefits and to compare alternative projects. Perhaps more important, the framework makes it easy to show how improvements will affect conventional financial measurements. These will often make sense to managers who are unfamiliar with the details of the data and processes involved. Finally, the framework and related financial measurements provide benchmarks that can later be compared with actual results to show whether the expected benefits were realized. Although such accountability can be somewhat frightening, proof of success will ultimately build credibility. This, in turn, will help future projects gain easier approval.
Tuesday, June 19, 2007
Unica Paper Gives Marketing Measurement Tips
If the wisdom of Plato can’t solve our marketing measurement problems, perhaps we can look to industry veteran Fred Chapman, currently with enterprise marketing software developer Unica. Fred recently gave a Webinar on Marketing Effectively on Your Terms and Your Time which did an excellent job laying out issues and solutions for today’s marketers. Follow-up materials included a white paper Building a Performance Measurement Culture in Marketing laying out ten steps toward improved marketing measurement.
The advice in the paper is reasonable, if fairly conventional: ensure sponsorship, articulate goals, identify important metrics, and so on. The paper also stresses the importance of having an underlying enterprise marketing system like, say, the one sold by Unica.
This is useful, so far as it goes. But it doesn't help with the real challenge of measurement projects, which is choosing metrics that support corporate strategies. So far, I haven’t come across a specific methodology for doing this. Most gurus seem to assume a flash of enlightenment will show each organization its own path. Perhaps organizations and strategies are all too different for any methodology to be more specific.
Relying on such insights suggests we have veered from Western rationalism to Eastern mysticism. I haven't yet seen a book "The Zen of Marketing Performance Measurement", but perhaps that's where we're headed.
Monday, June 18, 2007
Plato's View of Marketing Performance Measurement
I reread Plato’s Protagoras over the weekend for a change of pace. What makes that relevant here is Socrates’ contention that virtue is the ability to measure accurately—in particular, the ability to measure the amount of good or evil produced by an activity. Socrates’ logic is that people always seek the greatest amount of good (which he equates with pleasure), so different choices simply result from different judgments about which action will produce the most good.
I don’t find this argument terribly convincing, for reasons I’ll get to shortly. But it certainly resembles the case I’ve made here about the importance of measuring lifetime value as a way to make good business decisions. So, to a certain degree, I share Socrates' apparent frustration that so many people fail to accept the logic of this position—that they should devote themselves to learning to measure the consequences of their decisions.
Of course, the flaw in both Plato’s and my own vision is that people are not purely rational. I’ll leave the philosophical consequences to others, but the implication for business management is you can’t expect people to make decisions solely on the basis of lifetime value: they have too many other, non-rational factors to take into consideration.
It was none other than Protagoras who said “Man is the measure of all things”—and I think it’s fair to assume he would be unlikely to accept the Platonic ideal of marketing measurement, which makes lifetime value the measure of all things instead.
Friday, June 15, 2007
Accenture Paper Offers Simplified CRM Planning Approach
As I’ve pointed out many times before, consultants love their 2x2 matrices. Our friends at Accenture have once again illustrated the point with a paper “Surveying and Building Your CRM Future,” whose subtitle promises “a New CRM Software Decision-Making Model”.
Yep, the model is a matrix, dividing users into four categories based on data “density” (volume and update frequency) and business process uniqueness (need for customization). Each combination neatly maps to a different class of CRM software. Specifically:
- High density / low uniqueness is suited to enterprise packages like SAP and Oracle, since there’s a lot of highly integrated data but not too much customization required
- Low density / low uniqueness is suited to Software as a Service (SaaS) products like Salesforce.com since data and customization needs are minimal
- High density / high uniqueness is suited to “composite CRM” suites like Siebel (it’s not clear whether Accenture thinks any other products exist in this group)
- Low density / high uniqueness is suited to specialized “niche” vendors like marketing automation, pricing or analytics systems
In general these are reasonable dimensions, reasonable software classifications and a reasonable mapping of software to user needs. (Of course, some vendors might disagree.) Boundaries in the real world are not quite so distinct, but let's assume that Accenture has knowingly oversimplified for presentation purposes.
A couple of things still bother me. One is the notion that there’s something new here—the paper argues the “old” decision-making model was simply based on comparing functions to business requirements, as if this were no longer necessary. Although it’s true that there is something like functional parity in the enterprise and, perhaps, “composite CRM” categories, there are still many significant differences among the SaaS and niche products. More important, business requirements differ greatly among companies, and are far from encapsulated by two simple dimensions.
A cynic would point out that companies like Accenture pick one or two tools in each category and have no interest in considering alternatives that might be better suited for a particular client. But am I a cynic?
My other objection is that even though the paper mentions Service Oriented Architectures (SOA) several times, it doesn’t really come to grips with the implications. It relegates SOA to the high density / high uniqueness quadrant: “Essentially, a composite CRM solution is a solution that enables organizations to move toward SOAs.” Then it argues that enterprise packages themselves are migrating in the composite CRM direction. This is rather confusing but seems to imply the two categories will merge.
I think what’s missing here is an acknowledgement that real companies will always have a mix of systems. No firm runs purely on SAP or Oracle enterprise software. Large firms have multiple CRM implementations. Thus there will always be a need to integrate different solutions, regardless of where a company falls on the density and uniqueness dimensions. SOA offers great promise as a way to accomplish this integration. This means it is as likely to break apart the enterprise packages as to become the glue that holds them together.
In short, this paper presents some potentially helpful insights. But there’s still no shortcut around the real work of requirements analysis, vendor evaluation and business planning.
Thursday, June 14, 2007
Hosted Software Enters the Down Side of the Hype Cycle
“SMB SaaS sales robust, but holdouts remain” reads the headline on a piece from SearchSMB.com Website. (For the acronym impaired, SMB is “small and medium sized business” and SaaS is “software as a service”, a.k.a. hosted systems.) The article quotes two recent surveys, one by Saugatuck Technology and the other by Gartner. According to the article, Saugatuck found “SMB adoption rose from 9% in 2006 to 27% in 2007” among businesses under $1 billion in revenue, while Gartner reported “Only 7% of SMBs strongly believed that SaaS was suitable for their organizations, and only 17% said they would consider SaaS when its adoption became more widespread.”
These seem to be conflicting findings, although it’s impossible to know for certain without looking at the actual surveys and their audiences. But the very appearance of the piece suggests some of the bloom is off the SaaS rose. This is a normal stage in the hype cycle and frankly I’ve been anticipating it for some time. The more interesting question is why SMBs would be reluctant to adopt SaaS.
The article quotes Gartner Vice President and Research Director James Browning as blaming the fact that “SMBs are control freaks” and therefore less willing to trust their data to an outsider than larger, presumably more sophisticated entities. Maybe—although I’ve seen plenty of control freaks at big companies too. The article also mentions difficulties with customization and integration. Again, I suspect that’s a contributing factor but probably not the main one.
A more convincing insight came from an actual SMB manager, who pointed to quality of service issues and higher costs than in-house systems. I personally suspect the cost issue is the real one: whether or not they’re control freaks, SMBs are definitely penny-pinchers. That’s what happens when it’s your own money. (I say this as someone who’s run my own Very Small Business for many years.) On a more detailed financial level, SMBs have less formal capital appropriation processes than big companies, so their managers have less incentive to avoid the capital expense by purchasing SaaS products through their operating budgets.
One point the article doesn’t mention is that SaaS prices have gone up considerably, at least among the major vendors. This shifts the economics in favor of in-house systems, particularly since many SMBs can use low cost products that larger companies would not accept. This pricing shift makes sense from the vendors’ standpoint: as SaaS is accepted at larger companies with deeper pockets, it makes sense to raise prices to match. Small businesses may need to look beyond the market leaders to find pricing they can afford.
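To make the cost argument concrete, here is a rough back-of-the-envelope comparison; every figure is a hypothetical assumption of my own, not actual vendor pricing, and serves only to illustrate how rising per-seat SaaS fees shift the economics toward in-house systems:

# Hypothetical back-of-the-envelope SaaS vs. in-house comparison.
# All numbers are illustrative assumptions, not real vendor pricing.

def saas_cost(users, monthly_fee_per_user, years):
    # Subscription fees accumulate for as long as the service is used.
    return users * monthly_fee_per_user * 12 * years

def in_house_cost(license_fee, annual_maintenance_rate, annual_support_cost, years):
    # One-time license plus ongoing maintenance and internal support.
    return license_fee + license_fee * annual_maintenance_rate * years + annual_support_cost * years

users, years = 25, 3
print(saas_cost(users, 65, years))              # $65/user/month over 3 years -> 58500
print(in_house_cost(20000, 0.20, 5000, years))  # $20K license, 20% maintenance -> 47000
# As per-user SaaS fees rise, the in-house option looks better sooner,
# especially for SMBs willing to accept lower-cost packages.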
Wednesday, June 13, 2007
Autonomy Ultraseek Argues There's More to Search Than You-Know-Who
In case I didn’t make myself clear yesterday, my conclusion about balanced scorecard software is that the systems themselves are not very interesting, even though the concept itself can be extremely valuable. There’s nothing wrong with that: payroll software also isn’t very interesting, but people care deeply that it works correctly. In the case of balanced scorecards, you just need something to display the data—fancy dashboard-style interfaces are possible but not really the point. Nor is there much mystery about the underlying technology. All the value and all the art lie elsewhere: in picking the right measures and making sure managers pay attention to what the scorecards are telling them.
I only bring this up to explain why I won’t be writing much about balanced scorecard systems. In a word (and with all due respect, and stressing again that the application is important), I find them boring.
Contrast this with text search systems. These, I find fascinating. The technology is delightfully complicated and subtle differences among systems can have big implications for how well they serve particular purposes. Plus, as I mentioned a little while ago, there is some interesting convergence going on between search technology and data integration systems.
One challenge facing search vendors today is the dominance of Google. I hadn’t really given this much thought, but after reading the white paper “Business Search vs. Consumer Search” (registration required) from Autonomy’s Ultraseek product group, it became clear that they see Google as a major competitor. The paper doesn’t mention Google by name, but everything from the title on down is focused on explaining why there are “fundamental differences between searching for information on the Internet and finding the right document quickly inside your corporate intranets, public websites and partner extranets.”
The paper states Ultraseek’s case well. It mentions five specific differences between “consumer” search on the Web and business search:
- business users have different, known roles which can be used to tune results
- business users can employ category drill-down, metadata, and other alternatives to keyword searches
- business searches must span multiple repositories, not just Web pages
- business repositories are in many different formats and languages
- business searches are constrained by security and different user authorities
Ultraseek overstates its case in a few areas. Consumer search can use more than just keywords, and in fact can employ quite a few of the text analysis methods that Ultraseek mentions as business-specific. Consumer search is also working on moving beyond Web pages to different repositories, formats and languages. But known user roles and security issues are certainly more relevant to business than consumer search engines. And, although Ultraseek doesn’t mention it, Web search engines don't generally support some other features, like letting content owners tweak results to highlight particular items, that may matter in a business context.
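To show why a couple of these differences matter in practice, here is a minimal sketch, entirely my own illustration rather than anything from the Ultraseek paper, of how a business search layer might apply security trimming and role-based result tuning on top of raw keyword results; all class, field, and scoring choices are hypothetical:

# A minimal illustration (not Ultraseek's actual API) of two business-search
# features discussed above: security trimming and role-based result tuning.
# All names and scoring rules are hypothetical.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    score: float          # relevance score from the underlying keyword engine
    allowed_groups: set   # groups permitted to see this document
    department: str       # metadata used for role-based boosting

def business_search(raw_results, user_groups, user_department):
    # Security trimming: drop anything the user is not authorized to see.
    visible = [d for d in raw_results if d.allowed_groups & user_groups]
    # Role-based tuning: boost documents from the user's own department.
    for d in visible:
        if d.department == user_department:
            d.score *= 1.5
    return sorted(visible, key=lambda d: d.score, reverse=True)

results = business_search(
    [Document("Q2 pricing sheet", 0.8, {"sales"}, "sales"),
     Document("HR policy", 0.9, {"hr"}, "hr")],
    user_groups={"sales"}, user_department="sales")
print([d.title for d in results])  # ['Q2 pricing sheet']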
But, over all, the point is well taken: there really is a lot more to search than Google. People need to take the time to find the right tool for the job at hand.
Labels: search engines, text analysis