Friday, June 29, 2007
A few months ago, James Taylor of Fair Isaac asked me to look over a proof of Smart (Enough) Systems, a book he has co-written with industry guru Neil Raden of Hired Brains. The topic, of course, is enterprise decision management, which the book explains in great detail. It has now been released (you can order through Amazon or from James or Neil), so I asked James for a few comments to share.
What did you hope to accomplish with this book?
Fame and fortune. Seriously, what I wanted to do was bring a whole bunch of threads and thoughts together in one place, with enough space to develop ideas more fully. I have been writing about this topic a lot for several years and have seen lots of great examples. The trouble is that a blog (www.edmblog.com) and articles only give you so much room – you tend to skim each topic. A book really let me and Neil delve deeper into the whys and hows of the topic. Hopefully the book will let people see how unnecessarily stupid their systems are and how a focus on the decisions within those systems can make them more useful.
What are the biggest obstacles to EDM and how can people overcome them?
- One is the belief that they need to develop “smart” systems and that this requires yet-to-be-developed technology from the minds of researchers and science-fiction writers. Nothing could be further from the truth – the technology and approach needed to make systems smart enough are well established and proven.
- Another is the failure to focus on decisions as critical aspects of their systems. Historically, many decisions were taken manually or were not noticed at all. For instance, a call center manager might be put on the line to approve a fee refund for a good customer when the decision could have been taken by the system the call center representative was using, without the need for a referral. That’s an unnecessarily manual decision (a code sketch follows this list). A hidden decision might be something like the options on an IVR system. Most companies make them the same for everyone, yet once you know who is calling you could decide to give them a personalized set of options. Most companies don’t even notice this kind of decision and so take it poorly.
- Many companies have a hard time with “trusting” software and so like to have people make decisions. Yet the evidence is that the judicious use of automation for decisions can free up people to make the kinds of decisions they are really good at and let machines take the rest.
- Companies have become convinced that business intelligence means BI software, and so they don’t think about using that data to make predictions of the future, or about using those predictions to improve production systems. This is changing slowly as people realize how little value they are getting out of looking backwards with their data instead of looking forwards.
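To make the refund example concrete, here is a minimal sketch of what such an automated decision might look like as code. The function name, tiers, and thresholds are all invented for illustration; a real deployment would manage rules like these in a business rules engine rather than in hard-coded logic.

```python
# A minimal sketch of rule-driven decision automation, as in the fee refund
# example above. All names and thresholds are invented for illustration.

def decide_fee_refund(customer_tier: str, refund_amount: float,
                      refunds_last_year: int) -> str:
    """Return 'approve', 'refer', or 'deny' without a manual referral."""
    if customer_tier == "high" and refund_amount <= 50 and refunds_last_year < 3:
        return "approve"   # good customer, small fee: no manager needed
    if refund_amount > 200:
        return "refer"     # large amounts still go to a human
    return "deny"

print(decide_fee_refund("high", 35.00, 1))   # -> approve
```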
Can EDM be deployed piecemeal (individual decisions) or does it need some overarching framework to understand each decision's long-term impact?
It can and should be deployed piecemeal. Like any approach, it becomes easier once a framework is in place and part of an organization’s standard methodology, but local success with the automation and management of an individual decision is both possible and recommended for getting started.
The more of the basic building blocks of a modern enterprise architecture you have the better. Automated decisions are easier to embed if you are adopting SOA/BPM, easier to monitor if you have BI/Performance Management working and more accurate if your data is integrated and managed. None of these are pre-requisites for initial success though.
The book is very long. What did you leave out?
Well, I think it is a perfect length! What we left out were detailed how-tos on the technology and a formal methodology/project plans for individual activities. The book pulls together various themes and technologies and shows how they work together, but it does not replace the kind of detail you would get in a book on business rules or analytics, nor does it replace the need for analytic and systems development methods, be they agile, Unified Process, or CRISP-DM.
Tuesday, June 26, 2007
Free Data as in Free Beer
I found myself wandering the aisles at the American Library Association national conference over the weekend. Plenty of publishers, library management systems and book shelf builders, none of which are particularly relevant to this blog (although there was at least one “loyalty” system for library patrons). There was some search technology but nothing particularly noteworthy.
The only exhibitor that did catch my eye was Data-Planet, which aggregates data on many topics (think census, economic time series, stocks, weather, etc.) and makes it accessible over the Web through a convenient point-and-click interface. The demo system was incredibly fast for Web access, although I don’t know whether the show set-up was typical. The underlying database is nothing special (SQL Server), but apparently the tables have been formatted for quick and easy access.
None of this would have really impressed me until I heard the price: $495 per user per year. (Also available: a 30-day free trial and a $49.95 month-to-month subscription.) Let me make clear that we’re talking about LOTS of data: “hundreds of public and private industry sources” as the company brochure puts it. Knowing how much people often pay for much smaller data sets, this strikes me as one of those bargains that are too good to pass up even if you don’t know what you’ll do with it.
As I was pondering this, I recalled a post by Adelino de Almeida about some free data aggregation sites, Swivel and Data360. This made me a bit sad: I was pretty enthused about Data-Planet but don’t see how they can survive when others are giving away similar data for free. I’ve only played briefly with Swivel and Data360 but suspect they aren’t quite as powerful as Data-Planet, so perhaps there is room for both free and paid services.
Incidentally, Adelino has been posting recently about lifetime value. He takes a different approach to the topic than I do.
Wednesday, June 20, 2007
Using Lifetime Value to Measure the Value of Data Quality
As readers of this blog are aware, I’ve reluctantly backed away from arguing that lifetime value should be the central metric for business management. I still think it should, but haven’t found managers ready to agree.
But even if LTV isn’t the primary metric, it can still provide a powerful analytical tool. Consider, for example, data quality. One of the challenges facing a data quality initiative is how to justify the expense. Lifetime value provides a framework for doing just that.
The method is pretty straightforward: break lifetime value into its components and quantify the impact of a proposed change on whichever components will be affected. Roll this up to business value, and there you have it.
Specifically, such a breakdown would look like this:
Business value = sum of future cash flows = number of customers x lifetime value per customer
Number of customers would be further broken down into segments, with the number of customers in each segment. Many companies have a standard segmentation scheme that would apply to all analyses of this sort. Others would create custom segmentations depending on the nature of the project. Where a specific initiative such as data quality is concerned, it would make sense to isolate the customer segments affected by the initiative and just focus on them. (This may seem self-evident, but it’s easy for people to ignore the fact that only some customers will be affected, and apply estimated benefits to everybody. This gives nice big numbers but is often quite unrealistic.)
Lifetime value per customer can be calculated many ways, but a pretty common approach is to break it into two major components:
- acquisition value, further divided into the marketing cost of acquiring a new customer, the revenue from that initial purchase, and the fulfillment costs (product, service, etc.) related to that purchase. All these values are calculated separately for each customer segment.
- future value, which is the number of active years per customer times the value per year. Years per customer can be derived from a retention rate or from a more advanced approach such as a survivor curve (showing the number of customers remaining at the end of each year). Value per year can be broken into the number of orders per year times the value per order, or the average mix of products times the value per product. Value per order or product can itself be broken into revenue, marketing cost and fulfillment cost.
Laid out more formally, this comes to nine key factors (a toy calculation follows the list):
- number of customers
- acquisition marketing cost per customer
- acquisition revenue per customer
- acquisition fulfillment cost per customer
- number of years per customer
- orders per year
- revenue per order
- marketing cost per order
- fulfillment cost per order
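To show how the pieces fit together, here is the decomposition worked as a toy calculation, ignoring discounting for simplicity. Every number is invented; each variable corresponds to one of the nine factors above.

```python
# A toy lifetime value calculation built from the nine factors.
# All figures are invented for illustration; discounting is ignored.

num_customers = 10_000
acq_marketing_cost = 40.0            # per customer
acq_revenue = 100.0
acq_fulfillment_cost = 55.0
years_per_customer = 4.0
orders_per_year = 2.5
revenue_per_order = 80.0
marketing_cost_per_order = 5.0
fulfillment_cost_per_order = 45.0

acquisition_value = acq_revenue - acq_marketing_cost - acq_fulfillment_cost
value_per_order = (revenue_per_order - marketing_cost_per_order
                   - fulfillment_cost_per_order)
future_value = years_per_customer * orders_per_year * value_per_order
ltv_per_customer = acquisition_value + future_value
business_value = num_customers * ltv_per_customer

print(f"LTV per customer: ${ltv_per_customer:,.2f}")   # $305.00
print(f"Business value:   ${business_value:,.2f}")     # $3,050,000.00
```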
This approach may seem a little too customer-centric: after all, many data quality initiatives relate to things like manufacturing and internal business processes (e.g., payroll processing). Well, as my grandmother would have said, feh! (Rhymes with ‘heh’, in case you’re wondering, and signifies disdain.) First of all, you can never be too customer-centric, and shame on you for even thinking otherwise. Second of all, if you need it: every business process ultimately affects a customer, even if all it does is impact overhead costs (which affect prices and profit margins). Such items are embedded in the revenue and fulfillment cost figures above.
I could easily list examples of data quality changes that would affect each of the nine factors, but, like the margin of Fermat’s book, this blog post is too small to contain them. What I will say is that many benefits come from being able to do more precise segmentation, which will impact revenue, marketing costs, and numbers of customers, years, and orders per customer. Other benefits, impacting primarily fulfillment costs (using my broad definition), will involve more efficient back-office processes such as manufacturing, service and administration.
One additional point worth noting is that many of the benefits will be discontinuous. That is, data that's currently useless because of poor quality or total absence does not become slightly useful just because it becomes slightly better or partially available. A major change like targeted offers based on demographics can only be justified if accurate demographic data is available for a large portion of the customer base. The value of the data therefore remains at zero until a sufficient volume is reached; then it suddenly jumps to something significant. Of course, there are other cases, such as avoidance of rework or duplicate mailings, where each incremental improvement in quality does bring a small but immediate reduction in cost.
Once the business value of a particular data quality effort has been calculated, it’s easy to prepare a traditional return on investment calculation. All you need to add is the cost of improvement itself.
Naturally, the real challenge here is estimating the impact of a particular improvement. There’s no shortcut to make this easy: you simply have to work through the specifics of each case. But having a standard set of factors makes it easier to identify the possible benefits and to compare alternative projects. Perhaps more important, the framework makes it easy to show how improvements will affect conventional financial measurements. These will often make sense to managers who are unfamiliar with the details of the data and processes involved. Finally, the framework and related financial measurements provide benchmarks that can later be compared with actual results to show whether the expected benefits were realized. Although such accountability can be somewhat frightening, proof of success will ultimately build credibility. This, in turn, will help future projects gain easier approval.
Tuesday, June 19, 2007
Unica Paper Gives Marketing Measurement Tips
If the wisdom of Plato can’t solve our marketing measurement problems, perhaps we can look to industry veteran Fred Chapman, currently with enterprise marketing software developer Unica. Fred recently gave a Webinar, “Marketing Effectively on Your Terms and Your Time,” which did an excellent job of laying out issues and solutions for today’s marketers. Follow-up materials included a white paper, “Building a Performance Measurement Culture in Marketing,” laying out ten steps toward improved marketing measurement.
The advice in the paper is reasonable, if fairly conventional: ensure sponsorship, articulate goals, identify important metrics, and so on. The paper also stresses the importance of having an underlying enterprise marketing system like, say, the one sold by Unica.
This is useful, so far as it goes. But it doesn't help with the real challenge of measurement projects, which is choosing metrics that support corporate strategies. So far, I haven’t come across a specific methodology for doing this. Most gurus seem to assume a flash of enlightenment will show each organization its own path. Perhaps organizations and strategies are all too different for any methodology to be more specific.
Relying on such insights suggests we have veered from Western rationalism to Eastern mysticism. I haven't yet seen a book "The Zen of Marketing Performance Measurement", but perhaps that's where we're headed.
Monday, June 18, 2007
Plato's View of Marketing Performance Measurement
I reread Plato’s Protagoras over the weekend for a change of pace. What makes that relevant here is Socrates’ contention that virtue is the ability to measure accurately—in particular, the ability to measure the amount of good or evil produced by an activity. Socrates’ logic is that people always seek the greatest amount of good (which he equates with pleasure), so different choices simply result from different judgments about which action will produce the most good.
I don’t find this argument terribly convincing, for reasons I’ll get to shortly. But it certainly resembles the case I’ve made here about the importance of measuring lifetime value as a way to make good business decisions. So, to a certain degree, I share Socrates' apparent frustration that so many people fail to accept the logic of this position—that they should devote themselves to learning to measure the consequences of their decisions.
Of course, the flaw in both Plato’s and my own vision is that people are not purely rational. I’ll leave the philosophical consequences to others, but the implication for business management is you can’t expect people to make decisions solely on the basis of lifetime value: they have too many other, non-rational factors to take into consideration.
It was none other than Protagoras who said “Man is the measure of all things”—and I think it’s fair to assume he would be unlikely to accept the Platonic ideal of marketing measurement, which makes lifetime value the measure of all things instead.
Friday, June 15, 2007
Accenture Paper Offers Simplified CRM Planning Approach
As I’ve pointed out many times before, consultants love their 2x2 matrices. Our friends at Accenture have once again illustrated the point with a paper “Surveying and Building Your CRM Future,” whose subtitle promises “a New CRM Software Decision-Making Model”.
Yep, the model is a matrix, dividing users into four categories based on data “density” (volume and update frequency) and business process uniqueness (need for customization). Each combination neatly maps to a different class of CRM software, as the lookup sketch after this list makes explicit. Specifically:
- High density / low uniqueness is suited to enterprise packages like SAP and Oracle, since there’s a lot of highly integrated data but not too much customization required
- Low density / low uniqueness is suited to Software as a Service (SaaS) products like Salesforce.com since data and customization needs are minimal
- High density / high uniqueness is suited to “composite CRM” suites like Siebel (it’s not clear whether Accenture thinks any other products exist in this group)
- Low density / high uniqueness is suited to specialized “niche” vendors like marketing automation, pricing or analytics systems
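Here is the paper’s matrix rendered as a simple lookup table, just to make the mapping mechanical. The quadrant labels and examples come from the summary above; the code itself is purely illustrative.

```python
# The Accenture 2x2 as a lookup table. Keys are (density, uniqueness).

CRM_QUADRANTS = {
    ("high", "low"):  "enterprise package (e.g., SAP, Oracle)",
    ("low",  "low"):  "SaaS product (e.g., Salesforce.com)",
    ("high", "high"): "composite CRM suite (e.g., Siebel)",
    ("low",  "high"): "niche vendor (marketing automation, pricing, analytics)",
}

def recommend(density: str, uniqueness: str) -> str:
    return CRM_QUADRANTS[(density, uniqueness)]

print(recommend("low", "high"))   # -> niche vendor (...)
```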
In general these are reasonable dimensions, reasonable software classifications and a reasonable mapping of software to user needs. (Of course, some vendors might disagree.) Boundaries in the real world are not quite so distinct, but let's assume that Accenture has knowingly oversimplified for presentation purposes.
A couple of things still bother me. One is the notion that there’s something new here—the paper argues the “old” decision-making model was simply based on comparing functions to business requirements, as if this were no longer necessary. Although it’s true that there is something like functional parity in the enterprise and, perhaps, “composite CRM” categories, there are still many significant differences among the SaaS and niche products. More important, business requirements differ greatly among companies, and are far from encapsulated by two simple dimensions.
A cynic would point out that companies like Accenture pick one or two tools in each category and have no interest in considering alternatives that might be better suited for a particular client. But am I a cynic?
My other objection is that even though the paper mentions Service Oriented Architectures (SOA) several times, it doesn’t really come to grips with the implications. It relegates SOA to the high density / high uniqueness quadrant: “Essentially, a composite CRM solution is a solution that enables organizations to move toward SOAs.” Then it argues that enterprise packages themselves are migrating in the composite CRM direction. This is rather confusing but seems to imply the two categories will merge.
I think what’s missing here is an acknowledgement that real companies will always have a mix of systems. No firm runs purely on SAP or Oracle enterprise software. Large firms have multiple CRM implementations. Thus there will always be a need to integrate different solutions, regardless of where a company falls on the density and uniqueness dimensions. SOA offers great promise as a way to accomplish this integration. This means it is as likely to break apart the enterprise packages as to become the glue that holds them together.
In short, this paper presents some potentially helpful insights. But there’s still no shortcut around the real work of requirements analysis, vendor evaluation and business planning.
Thursday, June 14, 2007
Hosted Software Enters the Down Side of the Hype Cycle
“SMB SaaS sales robust, but holdouts remain” reads the headline on a piece from the SearchSMB.com Website. (For the acronym impaired, SMB is “small and medium sized business” and SaaS is “software as a service”, a.k.a. hosted systems.) The article quotes two recent surveys, one by Saugatuck Technology and the other by Gartner. According to the article, Saugatuck found “SMB adoption rose from 9% in 2006 to 27% in 2007” among businesses under $1 billion in revenue, while Gartner reported “Only 7% of SMBs strongly believed that SaaS was suitable for their organizations, and only 17% said they would consider SaaS when its adoption became more widespread.”
These seem to be conflicting findings, although it’s impossible to know for certain without looking at the actual surveys and their audiences. But the very appearance of the piece suggests some of the bloom is off the SaaS rose. This is a normal stage in the hype cycle and frankly I’ve been anticipating it for some time. The more interesting question is why SMBs would be reluctant to adopt SaaS.
The article quotes Gartner Vice President and Research Director James Browning as blaming the fact that “SMBs are control freaks” and therefore less willing to trust their data to an outsider than larger, presumably more sophisticated entities. Maybe—although I’ve seen plenty of control freaks at big companies too. The article also mentions difficulties with customization and integration. Again, I suspect that’s a contributing factor but probably not the main one.
A more convincing insight came from an actual SMB manager, who pointed to quality of service issues and higher costs than in-house systems. I personally suspect the cost issue is the real one: whether or not they’re control freaks, SMBs are definitely penny-pinchers. That’s what happens when it’s your own money. (I say this as someone who’s run my own Very Small Business for many years.) On a more detailed financial level, SMBs have less formal capital appropriation processes than big companies, so their managers have less incentive to avoid the capital expense by purchasing SaaS products through their operating budgets.
One point the article doesn’t mention is that SaaS prices have gone up considerably, at least among the major vendors. This shifts the economics in favor of in-house systems, particularly since many SMBs can use low cost products that larger companies would not accept. This pricing shift makes sense from the vendors’ standpoint: as SaaS is accepted at larger companies with deeper pockets, it makes sense to raise prices to match. Small businesses may need to look beyond the market leaders to find pricing they can afford.
Wednesday, June 13, 2007
Autonomy Ultraseek Argues There's More to Search Than You-Know-Who
In case I didn’t make myself clear yesterday, my conclusion about balanced scorecard software is that the systems themselves are not very interesting, even though the concept itself can be extremely valuable. There’s nothing wrong with that: payroll software also isn’t very interesting, but people care deeply that it works correctly. In the case of balanced scorecards, you just need something to display the data—fancy dashboard-style interfaces are possible but not really the point. Nor is there much mystery about the underlying technology. All the value and all the art lie elsewhere: in picking the right measures and making sure managers pay attention to what the scorecards are telling them.
I only bring this up to explain why I won’t be writing much about balanced scorecard systems. In a word (and with all due respect, and stressing again that the application is important), I find them boring.
Contrast this with text search systems. These, I find fascinating. The technology is delightfully complicated and subtle differences among systems can have big implications for how well they serve particular purposes. Plus, as I mentioned a little while ago, there is some interesting convergence going on between search technology and data integration systems.
One challenge facing search vendors today is the dominance of Google. I hadn’t really given this much thought, but reading the white paper “Business Search vs. Consumer Search” (registration required) from Autonomy’s Ultraseek product group made it clear that they are finding Google to be major competition. The paper doesn’t mention Google by name, but everything from the title on down is focused on explaining why there are “fundamental differences between searching for information on the Internet and finding the right document quickly inside your corporate intranets, public websites and partner extranets.”
The paper states Ultraseek’s case well. It mentions five specific differences between “consumer” search on the Web and business search (the security point is sketched in code after the list):
- business users have different, known roles which can be used to tune results
- business users can employ category drill-down, metadata, and other alternatives to keyword searches
- business searches must span multiple repositories, not just Web pages
- business repositories are in many different formats and languages
- business searches are constrained by security and different user authorities
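As a small illustration of that last point, here is a sketch of entitlement filtering applied to search results. The data model and function are invented for illustration and bear no relation to Ultraseek’s actual implementation.

```python
# A sketch of security-constrained business search: hits are filtered by the
# user's entitlements before ranking. Invented data model, illustration only.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    repository: str        # e.g., intranet, extranet, file share
    allowed_roles: set

def business_search(hits, user_roles):
    """Drop any hit the user is not entitled to see."""
    return [d for d in hits if d.allowed_roles & user_roles]

docs = [Document("Q2 forecast", "finance", {"finance", "exec"}),
        Document("Style guide", "intranet", {"all"})]
print(business_search(docs, {"all"}))   # only the style guide survives
```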
Ultraseek overstates its case in a few areas. Consumer search can use more than just keywords, and in fact can employ quite a few of the text analysis methods that Ultraseek mentions as business-specific. Consumer search is also working on moving beyond Web pages to different repositories, formats and languages. But known user roles and security issues are certainly more relevant to business than consumer search engines. And, although Ultraseek doesn’t mention it, Web search engines don't generally support some other features, like letting content owners tweak results to highlight particular items, that may matter in a business context.
But, over all, the point is well taken: there really is a lot more to search than Google. People need to take the time to find the right tool for the job at hand.
Tuesday, June 12, 2007
Looking for Balanced Scorecard Software
I haven’t been able to come up with an authoritative list of major balanced scorecard software vendors. UK-based consultancy 2GC lists more than 100 in a helpful database with little blurbs on each, but they include performance management systems that are not necessarily for balanced scorecards. The Balanced Scorecard Collaborative, home of balanced scorecard co-inventor David P. Norton, lists two dozen products they have certified as meeting true balanced scorecard criteria. Of these, more than half belong to non-specialist companies, including enterprise software vendors (Oracle, Peoplesoft [now Oracle], SAP, Infor, Rocket Software) and broad business intelligence systems (Business Objects, Cognos, Hyperion [now Oracle], Information Builders, Pilot Software [now SAP], SAS). Most of these firms have purchased specialist products. The remaining vendors (Active Strategy, Bitam, Consist FlexSI, Corporater, CorVu, InPhase, Intalev, PerformanceSoft [now Actuate], Procos, Prodacapo, QPR and Vision Grupos Consultores) are a combination of performance management specialists and regional consultancies.
That certified products are available from all the major enterprise and business intelligence vendors shows that the basic functions needed for balanced scorecards are well understood and widely available. I’m sure there are differences among the products, but suspect the choice of system will rarely be critical to project success or failure. The core functions are creation of strategy maps and cascading scorecards. I suspect systems vary more widely in their ability to import and transform scorecard data. A number of products also include project management functions such as task lists and milestone reporting. This is probably outside the core requirements for balanced scorecards but does make sense in the larger context of providing tools to help meet business goals.
If your idea of a good time is playing with this sort of system (and whose isn’t?), Strategy Map offers a fully functional personal version for free.
Monday, June 11, 2007
Why Balanced Scorecards Haven't Succeeded at Marketing Measurement
All this thinking about the overwhelming number of business metrics has naturally led me to consider balanced scorecards as a way to organize metrics effectively. I think it’s fair to say that balanced scorecards have had only modest success in the business world: the concept is widely understood, but far from universally employed.
Balanced scorecards make an immense amount of sense. A disciplined scorecard process begins with strategy definition, followed by a strategy map, which identifies the measures most important to a business and how they relate to each other and to final results. Once the top-level scorecard is built, subsidiary scorecards report on components that contribute to the top-level measures, providing more focused information and targets for lower-level managers.
That’s all great. But my problem with scorecards, and I suspect the reason they haven’t been used more widely, is they don’t make a quantifiable link between scorecard measures and business results. Yes, something like on-time arrivals may be a critical success factor for an airline, and thus appear on its scorecard. That scorecard will even give a target value to compare with actual performance. But it won’t show the financial impact of missing the target—for example, every 1% shortfall vs. the target on-time arrival rate translates into $10 million in lost future value. Proponents would argue (a) this value is impossible to calculate because there are so many intervening factors and (b) so long as managers are rewarded for meeting targets (or punished for not meeting them), that’s incentive enough. But I believe senior managers are rightfully uncomfortable setting those sorts of targets and reward systems unless the relationships between the targets and financial results are known. Otherwise, they risk disproportionately rewarding the selected behaviors, thereby distorting management priorities and ultimately harming business results.
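To make the missing link concrete, here is the airline example worked as a toy formula, using the hypothetical $10 million per percentage point from the paragraph above.

```python
# Translating a scorecard shortfall into dollars. The $10 million per 1%
# figure is the hypothetical from the text, not a real airline estimate.

VALUE_PER_POINT = 10_000_000   # lost future value per 1% shortfall

def shortfall_cost(target_rate: float, actual_rate: float) -> float:
    shortfall_points = max(0.0, (target_rate - actual_rate) * 100)
    return shortfall_points * VALUE_PER_POINT

print(f"${shortfall_cost(0.90, 0.875):,.0f}")   # 2.5 points short -> $25,000,000
```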
Loyal readers of this blog might expect me to propose lifetime value as a better alternative. It probably is, but the lukewarm response it elicits from most managers has left me cautious. Whether managers don’t trust LTV calculations because they’re too speculative, or (more likely) are simply focused on short-term results, it’s pretty clear that LTV will not be the primary measurement tool in most organizations. I haven’t quite given up hope that LTV will ultimately receive its due, but for now feel it makes more sense to work with other measures that managers find more compelling.
Friday, June 08, 2007
So Many Measures, So Little Time
I’ve been collating lists of marketing performance metrics from different sources, which is exactly as much fun as it sounds. One result that struck me was how little overlap I found: on two big lists of just over 100 metrics each, there were only 24 in common. These were fundamental concepts like market share, customer lifetime value, gross rating points, and clickthrough rate. Oddly enough, some metrics that I consider very basic were totally absent, such as number of campaigns and average campaign size. (These are used to measure staff productivity and degree of targeting.) I think the lesson here is that there is an infinite number of possible metrics, and what’s important is finding or inventing the right ones for each situation. A related lesson is that there is no agreed-upon standard set of metrics to start from.
I also found I could divide the metrics into three fundamental groups. Two were pretty much expected: corporate metrics related to financial results, customers and market position (i.e., brand value); and execution metrics related to advertising, retail, salesforce, Internet, dealers, etc. The third group, which took me a while to recognize, was product metrics: development cost, customer needs, number of SKUs, repair cost, revenue per unit, and so on. Most discussions of the topic don’t treat product metrics as a distinct category, but it’s clearly different from the other two. Of course, many product attributes are not controlled by marketing, particularly in the short term. But it’s still important to know about them since they can have a major impact on marketing results.
Incidentally, this brings up another dimension that I’ve found missing in most discussions, which often classify metrics in a sequence of increasing sophistication, such as activity measures, results measures and leading indicators. Such schemes have no place for metrics based on external factors such as competitor behavior, customer needs, or economic conditions--even though such metrics are present in the metrics lists. Such items are by definition beyond the control of the marketers being measured, so in a sense it’s wrong to consider them as marketing performance metrics. But they definitely impact marketing results, so, like product attributes, they are needed as explanatory factors in any analysis.
Thursday, June 07, 2007
Ace Hardware Fits Ads to Customer Context
As you almost certainly didn’t notice, I didn’t make a blog post yesterday. For no logical reason, this makes me feel guilty. So, since I happened to just see an interesting article, I’ll make two today.
A piece in this week’s BrandWeek describes a promotion by Ace Hardware that will allow people who are tracking a hurricane to find a nearby hardware store ("Look Like Rain? Ace Hardware Hopes So", BrandWeek, June 6, 2007).
This is a great example of using customer context in marketing—one of the core tenets of the Customer Experience Matrix. It’s particularly powerful because it uses context twice: first, in identifying customers who are likely to be located in a hurricane-prone area, and second, in giving them information about their local hardware store. If they added a mobile-enabled feature that included realtime driving directions, I’d have to give them some sort of award.
eWeek: Semantic Web Shows Convergence of Search and Data Integration
This week’s eWeek has an unusually lucid article explaining the Semantic Web. The article presents the Semantic Web as a way to tag information in a structured way and make it searchable via the Web. I think this oversimplifies a bit by leaving out the importance of the relationships among the tags, which are part of the “semantic” framework and what makes the queries able to return non-trivial results. But no matter—the article gives a clear description of the end result (querying the Web like a database), and that’s quite helpful.
From my personal perspective, it was intriguing that the article also quoted Web creator Tim Berners-Lee as stating "The number one role of Semantic Web technologies is data integration across applications." This supports my previous contention that search applications and data integration (specifically, data matching) tools are starting to overlap. Of course, I was coming at it from the opposite direction, specifically suggesting that data matching technologies would help to improve searches of unstructured data. The article is suggesting that a search application (Semantic Web) would help to integrate structured data. But either way, some cross-pollination is happening today and a full convergence could follow.
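As a toy illustration of why those relationships matter, consider a handful of subject-predicate-object triples. A flat keyword index could not answer the question below; the join across two relationships is what makes the result non-trivial. The data and vocabulary here are invented.

```python
# Querying (subject, predicate, object) triples like a database.

triples = [
    ("AcmeCorp", "headquarteredIn", "Boston"),
    ("Boston",   "locatedIn",       "Massachusetts"),
    ("AcmeCorp", "industry",        "Software"),
]

def companies_in(state):
    """Join two relationships: company -> city -> state."""
    cities = {s for s, p, o in triples if p == "locatedIn" and o == state}
    return [s for s, p, o in triples if p == "headquarteredIn" and o in cities]

print(companies_in("Massachusetts"))   # -> ['AcmeCorp']
```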
Tuesday, June 05, 2007
A Small But Useful Thought
I’ve been continuing my research into marketing performance measurement. Nothing earth-shattering to report, but I did come across one idea worth sharing. I saw a couple of examples where a dashboard graph displayed two measures that represent trade-offs: say, inventory level vs. out-of-stock conditions, or call center time per call vs. call center cross-sell revenue.
Showing two compensatory metrics together at least makes the implicit trade-off visible. Results must still be related to ultimate business value to check whether the net change is positive or negative (e.g., is the additional cross-sell revenue worth more than the additional call time?). But showing only the net value would hide the underlying changes in the business. So I think it’s more useful to show the measures themselves.
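As a back-of-the-envelope illustration of that net-value check (every number here is invented):

```python
# Invented figures for the call center trade-off: longer calls cost
# money, cross-sell revenue earns it back. The net tells you whether
# the trade-off paid off; the two underlying metrics tell you why.
calls = 10_000
extra_seconds_per_call = 45        # added handle time from cross-selling
cost_per_agent_hour = 30.00        # fully loaded cost, in dollars
cross_sell_margin_per_call = 0.55  # incremental margin, in dollars

extra_cost = calls * extra_seconds_per_call / 3600 * cost_per_agent_hour
extra_revenue = calls * cross_sell_margin_per_call
net_value = extra_revenue - extra_cost

print(f"extra cost:    ${extra_cost:,.2f}")     # $3,750.00
print(f"extra revenue: ${extra_revenue:,.2f}")  # $5,500.00
print(f"net value:     ${net_value:,.2f}")      # $1,750.00
```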
Sunday, June 03, 2007
Data Visualization Is Just One Part of a Dashboard System
Following Friday’s post on dashboard software, I want to emphasize that data visualization techniques are really just one element of those systems, and not necessarily the most important. Dashboard systems must gather data from source systems; transform and consolidate it; place it in structures suited for high-speed display and analysis; identify patterns, correlations and exceptions; and make it accessible to different users within the constraints of user interests, skills and authorizations. Although I haven’t researched the dashboard products in depth, even a cursory glance at their Web sites suggests they vary widely in these areas.
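For concreteness, here is a minimal sketch of that non-display work, with invented data and pandas standing in for whatever each product actually uses:

```python
import pandas as pd

# Gather: stand-ins for source systems; a real dashboard would pull
# these from operational databases. All figures invented.
orders = pd.DataFrame({
    "region": ["Northeast", "Northeast", "Southwest", "Midwest"],
    "revenue": [120_000, 80_000, 60_000, 90_000],
})
targets = pd.DataFrame({
    "region": ["Northeast", "Southwest", "Midwest"],
    "revenue_target": [180_000, 100_000, 95_000],
})

# Transform and consolidate into a structure ready for fast display.
summary = (orders.groupby("region", as_index=False)["revenue"].sum()
                 .merge(targets, on="region"))

# Identify exceptions: flag regions running well below target.
summary["pct_of_target"] = summary["revenue"] / summary["revenue_target"]
summary["exception"] = summary["pct_of_target"] < 0.8

# Make it accessible within user authorizations: each user sees only
# the regions they are entitled to.
def view_for(user_regions):
    return summary[summary["region"].isin(user_regions)]

print(view_for({"Northeast", "Southwest"}))
```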
As with any kind of analytical system, most of the work and most of the value in dashboards will be in the data gathering. Poor visualization of good data can be overcome; good visualization of poor data is basically useless. So users should focus their attention on the underlying capabilities and not be distracted by display alone.
Friday, June 01, 2007
Dashboard Software: Finding More than Flash
I’ve been reading a lot about marketing performance metrics recently, which turns out to be a drier topic than I can easily tolerate—and I have a pretty high tolerance for dry. To give myself a bit of a break without moving too far afield, I decided to research marketing dashboard software. At least that let me look at some pretty pictures.
Sadly, the same problem that afflicts discussions of marketing metrics affects most dashboard systems: what they give you is a flood of disconnected information without any way to make sense of it. Most of the dashboard vendors stress their physical display capabilities—how many different types of displays they provide, how much data they can squeeze onto a page, how easily you can build things—and leave the rest to you. What this comes down to is: they let you make bigger, prettier mistakes faster.
Two exceptions did crop up that seem worth mentioning.
- ActiveStrategy builds scorecards that are specifically designed to link top-level business strategy with lower-level activities and results. They refer to these as “cascading” scorecards, which seems a good term for the relationship (a toy sketch of the rollup idea appears after this list). I suppose this isn’t truly unique; I recall the people at SAS showing me a similar hierarchy of key performance indicators, and there are probably other products with a cascading approach. Part of this may be the difference between dashboards and scorecards. Still, if nothing else, ActiveStrategy does a particularly good job of showing how to connect data with results.
- VisualAcuity doesn’t have the same strategic focus, but it does seek more effective alternatives to the normal dashboard display techniques. As their Web site puts it, “The ability to assimilate and make judgments about information quickly and efficiently is key to the definition of a dashboard. Dashboards aren’t intended for detailed analysis, or even great precision, but rather summary information, abbreviated in form and content, enough to highlight exceptions and initiate action.” VisualAcuity dashboards rely on many small displays and time-series graphs to do this.
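Here is that toy sketch of a cascading scorecard: each KPI’s score rolls up as a weighted average of its children, so a top-level strategy metric is traceably built from lower-level operational ones. The names, weights, and scores are all invented, and real products are certainly more sophisticated than this.

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    score: float = 0.0                # leaf KPIs carry a measured score
    children: list = field(default_factory=list)  # (weight, KPI) pairs

    def rollup(self) -> float:
        """A parent's score is the weighted average of its children."""
        if not self.children:
            return self.score
        total_weight = sum(w for w, _ in self.children)
        return sum(w * kpi.rollup() for w, kpi in self.children) / total_weight

growth = KPI("Revenue growth", children=[
    (0.6, KPI("Cross-sell rate", score=0.72)),
    (0.4, KPI("New-customer acquisition", score=0.55)),
])
strategy = KPI("Profitable growth", children=[
    (0.7, growth),
    (0.3, KPI("Cost per call", score=0.81)),
])

print(f"{strategy.name}: {strategy.rollup():.2f}")  # -> 0.70
```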
Incidentally, if you’re just looking for something different, FYIVisual uses graphics rather than text or charts in a way that is probably very efficient at uncovering patterns and exceptions. It definitely doesn’t address the strategy issue and may or may not be more effective than more common display techniques. But at least it’s something new to look at.