I had a product briefing from Teradata earlier this week after not talking for nearly two years. They are getting ready to release version 6 of their marketing automation software, Teradata Relationship Manager (formerly Teradata CRM). The new version has a revamped user interface and a large number of minor refinements, such as allowing multiple levels of control groups. But the real change is technical: the system has been entirely rebuilt on a J2EE platform. This was apparently a huge effort – when I checked my notes from two years ago, Teradata was talking about releasing the same version 6 with pretty much the same changes. My contact at Teradata told me the delay was due to difficulties with the migration. She promises the current schedule for releasing version 6 by December will definitely be met.
I’ll get back to v6 in a minute, but I did want to mention the other big news out of Teradata recently: alliances with Assetlink and Infor for marketing automation enhancements, and with SAS Institute for analytic integration. Each deal has its own justification, but it’s hard not to see them as showing a new interest in cooperation at Teradata, whose proprietary technology has long kept it isolated from the rest of the industry. The new attitude might be related to Teradata’s spin-off from NCR, completed October 1, which presumably frees (or forces) management to consider options it rejected while inside the NCR family. It might also reflect increasing competition from database appliances like Netezza, DATAllegro, and Greenplum. (The Greenplum Web site offers links to useful Gartner and Ventana Research papers if you want to look at the database appliance market in more detail.)
But I digress. Let’s talk first about the alliances and then v6.
The Assetlink deal is probably the most significant yet least surprising of the new arrangements. Assetlink is one of the most complete marketing resource management suites, so it gives Teradata a quick way to provide a set of features that are now standard in enterprise marketing systems. (Teradata had an earlier alliance in this area with Aprimo, but that never took hold. Teradata mentioned technical incompatibility with Aprimo’s .NET foundation as well as competitive overlap with Aprimo’s own marketing automation software.) In the all-important area of integration, Assetlink and Teradata will both run on the same data structures and coordinate their internal processes, so they should work reasonably seamlessly. Assetlink still has its own user interface and workflow engine, though, so some separation will still be apparent. Teradata stressed that it will be investing to create a version of Assetlink that runs on the Teradata database and will sell that under the Teradata brand.
The Infor arrangement is a little more surprising because Infor also has its own marketing automation products (the old Epiphany system) and because Infor is more oriented to mid-size businesses than the giant retailers, telcos, and others served by Teradata. Perhaps the separate customer bases make the competitive issue less important. In any event, the Infor alliance is limited to Infor’s real time decision engine, currently known as CRM Epiphany Inbound Marketing, which was always Epiphany’s crown jewel. Like Assetlink, Infor gives Teradata a quick way to offer a capability (real time interaction management, including self-adjusting predictive models) that is increasingly requested by clients and offered by competitors. Although Epiphany is also built on J2EE, the initial integration (available today) will still be limited: the software will run on a separate server using SQL Server as its data store. A later release, due in the first quarter of next year, will still have a separate server but connect directly with the Teradata database. Even then, though, real-time interaction flows will be defined outside of Teradata Relationship Manager. Integration will be at the data level: Teradata will provide lists of which customers are eligible for different offers and will be notified of interaction results. Teradata will be selling its own branded version of the Infor product too.
The SAS alliance is described as a “strategic partnership” in the firms' joint press release, which sounds jarring coming from two long-time competitors. Basically, it involves running SAS analytic functions inside the Teradata database. This turns out to be part of a larger SAS initiative called “in-database processing” which seeks similar arrangements with other database vendors. Teradata is simply the first partner to be announced, so maybe the relationship isn’t so special after all. On the other hand, the companies’ joint roadmap includes deeper integration of selected SAS “solutions” with Teradata, including mapping of industry-specific SAS logical data models to corresponding Teradata structures. The companies will also create a joint technical “center of excellence” where specialists from both firms will help clients improve performance of SAS and Teradata products. We’ll see whether other database vendors work this closely with SAS. In the specific area of marketing automation, the two vendors will continue to compete head-to-head, at least for the time being.
This brings us back to Teradata Relationship Manager itself. As I already mentioned, v6 makes major changes at the deep technical level and in the user interface, but the 100+ changes in functionality are relatively minor. In other words, the functional structure of the product is the same.
This structure has always been different from other marketing automation systems. What sets Teradata apart is a very systematic approach to the process of customer communications: it’s not simply about matching offers to customers, but about managing all the components that contribute to those offers. For example, communication plans are built up from messages, which contain collateral, channels and response definitions, and the collateral itself may contain personalized components. Campaigns are created by attaching communication plans to segment plans, which are constructed from individual segments. All these elements in turn are subject to cross-campaign constraints on channel capacity, contacts per customer, customer channel preferences, and message priorities. In other words, everything is related to everything else in a very logical, precise fashion – just like a database design. Did I mention that Teradata is a database company?
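To make that hierarchy a bit more concrete, here’s a minimal sketch in Python of how such a structure might be represented. To be clear, the class and field names are my own invention for illustration; they are not Teradata’s actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration of the kind of hierarchy described above;
# these names are mine, not Teradata's actual data model.

@dataclass
class Collateral:
    name: str
    personalized_components: List[str] = field(default_factory=list)

@dataclass
class Message:
    collateral: Collateral
    channel: str              # e.g. "email", "direct mail", "call center"
    response_definition: str  # how a response to this message is recognized

@dataclass
class CommunicationPlan:
    messages: List[Message]

@dataclass
class SegmentPlan:
    segments: List[str]       # individual segment definitions

@dataclass
class Campaign:               # a campaign attaches a communication plan to a segment plan
    communication_plan: CommunicationPlan
    segment_plan: SegmentPlan

@dataclass
class CrossCampaignConstraints:
    channel_capacity: Dict[str, int]     # channel -> max contacts per period
    max_contacts_per_customer: int
    channel_preferences: Dict[str, str]  # customer id -> preferred channel
    message_priorities: Dict[str, int]   # message name -> priority
```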
This approach takes some practice before you understand how the parts are connected – again, like a sophisticated database. It can also make simple tasks seem unnecessarily complicated. But it rewards patient users with a system that handles complex tasks accurately and supports high volumes without collapsing. For example, managing customers across channels is very straightforward because all channels are structurally equivalent.
The functional capabilities of Relationship Manager are not so different from Teradata’s main competitors (SAS Marketing Automation and Unica). But those products have evolved incrementally, often through acquisition, and parts are still sold as separate components. It’s probably fair to say that they are not as tightly or logically integrated as Teradata’s.
This very tight integration also has drawbacks, since any changes to the data structure need careful consideration. Teradata definitely has a tendency to fit new functions into existing structures, such as setting up different types of campaigns (outbound, multi-step, inbound) through a single interface. Sometimes that’s good; sometimes it’s just easier to do different things in different ways.
Teradata has also been something of a laggard at integrating statistical modeling into its system. Even what it calls “optimization” is rule-based rather than the constrained statistical optimization offered by other vendors. I’m actually rather fond of Teradata’s optimization approaches: its ability to allocate leads across channels based on sophisticated capacity rules (e.g., minimum and maximum volumes from different campaigns; automatically sending overflow from one channel to another; automatically reallocating leads based on current work load) has always impressed me and I believe remains unrivaled. But allowing marketers to build and deploy true predictive models is increasingly important and, unless I’ve missed something, is still not offered by Teradata.
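To give a flavor of what that kind of rule-based allocation involves, here’s a toy sketch. The rules and numbers are invented and this is emphatically not Teradata’s algorithm; it just shows the general idea of channel capacity limits with overflow routing.

```python
# Toy sketch of rule-based lead allocation: each channel has a capacity cap
# and a designated overflow channel. Invented for illustration only.

def allocate_leads(leads, channels):
    """Assign each lead to its preferred channel while capacity remains,
    otherwise route it to that channel's overflow channel."""
    assigned = {name: [] for name in channels}
    for lead in leads:
        preferred = lead["preferred_channel"]
        spec = channels[preferred]
        if len(assigned[preferred]) < spec["max_volume"]:
            assigned[preferred].append(lead["id"])
        else:
            assigned[spec["overflow_to"]].append(lead["id"])
    return assigned

channels = {
    "call_center": {"max_volume": 2, "overflow_to": "email"},
    "email":       {"max_volume": 100, "overflow_to": "email"},
}
leads = [{"id": i, "preferred_channel": "call_center"} for i in range(5)]
print(allocate_leads(leads, channels))
# {'call_center': [0, 1], 'email': [2, 3, 4]}
```

A real implementation would also enforce minimum volumes per campaign and rebalance as workloads change, which is where the sophistication the vendors compete on actually lies.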
This is why the new alliances are so intriguing. Assetlink adds a huge swath of capabilities that Teradata otherwise would have very slowly and painstakingly created by expanding its core data model. Infor and SAS both address the analytical weaknesses of the existing system, while Infor in particular adds another highly desired feature without waiting to build new structures in-house. All these changes suggest a welcome sense of urgency in responding quickly to customer needs. If this new attitude holds true, it seems unlikely that Teradata will accept another two year delay in the release of Relationship Manager version 7.
Wednesday, October 31, 2007
Thursday, October 25, 2007
Business Rules Forum Targets Enterprise Decisioning as the Next Big Thing
I’m headed back from the combined Business Rules Forum / Rules Technology Summit / Rules Expo conference in Orlando. The theme of the conference was ‘Enterprise Decisioning Comes of Age’. The general idea is that business rules have been applied extensively in a few areas, including fraud detection and insurance rating, and are now poised to play a larger role in coordinating decisions throughout the enterprise. This idea has been championed in particular by James Taylor, formerly of Fair Isaac and now running his own firm Smart (Enough) Systems, who seems to have near rock star status within this community.
This is all music to my own ears, since the Customer Experience Matrix is precisely designed as a tool to help enterprises understand how all their pieces fit together across channels and the customer life cycle. It therefore provides an essential framework for organizing, prioritizing and integrating enterprise decision projects. Whether the generic notion of ‘enterprise decision management’ really takes off as the Next Big Thing remains to be seen – it still sounds pretty geeky. One question is what it takes for CEOs to get really excited about the concept: because it is by definition enterprise wide, it takes CEO sponsorship to make it happen. Neither James nor other panelists at the discussions I heard really had an answer for that one.
Thursday, October 18, 2007
Neolane Offers a New Marketing Automation Option
Neolane, a Paris-based marketing automation software vendor, formally announced its entry to the U.S. market last week. I’ve been tracking Neolane for some time but chose not to write about it until they established a U.S. presence. So now the story can be told.
Neolane is important because it’s a full-scale competitor to Unica and the marketing automation suites of SAS and Teradata, which have pretty much had the high-end market to themselves in recent years. (You might add SmartFocus and Alterian to the list, but they sell mostly to service providers rather than end-users.) The company originally specialized in email marketing but has since broadened to incorporate other channels. Its email heritage still shows in strong content management and personalization capabilities. These are supplemented by powerful collaborative workflow, project management and marketing planning. Like many European products, Neolane was designed from the ground up to support trans-national deployments with specialized features such as multiple languages and currencies. The company, founded in 2001, now has over 100 installed clients. These include many very large firms such as Carrefour, DHL International and Virgin Megastores.
In evaluating enterprise marketing systems, I look at five sets of capabilities: planning/budgeting; project management; content management; execution; and analysis. (Neolane itself offers a different set of five capabilities, although they are pretty similar.) Let’s go through these in turn.
Neolane does quite well in the first three areas, which all draw on its robust workflow management engine. This engine is part of Neolane’s core technology, which allows tight connections with the rest of the system. By contrast, many Neolane competitors have added these three functions at least in part through acquisition. This often results in less-than-perfect integration among the suite components.
Execution is a huge area, so we’ll break it into pieces. Neolane’s roots in email result in a strong email capability, of course. The company also claims particular strength in mobile (phone) marketing, although it’s not clear this involves more than supporting SMS and MMS output formats. Segmentation and selection features are adequate but not overly impressive: when it comes to really complex queries, Neolane users may find themselves relying on hand-coded SQL statements or graphical flow charts that can quickly become unmanageable. Although most of today’s major campaign management systems have deployed special grid-based interfaces to handle selections with hundreds or thousands of cells, I didn’t see that in Neolane.
On the other hand, Neolane might argue that its content personalization features reduce the need for building so many segments in the first place. Personalization works the same across all media: users embed selection rules within templates for email, Web pages and other messages. This is a fairly standard approach, but Neolane offers a particularly broad set of formats. It also provides different ways to build the rules, ranging from a simple scripting language to a point-and-click query builder. Neolane’s flow charts allow an additional level of personalized treatment, supporting sophisticated multi-step programs complete with branching logic. That part of the system seems quite impressive.
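For readers who haven’t seen this style of personalization, here’s a generic sketch of selection rules embedded in a message template. The syntax is my own stand-in, not Neolane’s actual scripting language or query builder.

```python
# Generic illustration of rule-driven personalization: each template block
# carries a selection rule, and only blocks whose rule matches the recipient
# are included. The syntax is invented, not Neolane's.

def render(blocks, recipient):
    """blocks: list of (rule, text) pairs. Keep each text whose rule is true
    for this recipient, then fill in the recipient's fields."""
    chosen = [text for rule, text in blocks if rule(recipient)]
    return "\n".join(chosen).format(**recipient)

blocks = [
    (lambda r: True,                  "Dear {first_name},"),
    (lambda r: r["segment"] == "vip", "As one of our best customers, enjoy free shipping."),
    (lambda r: r["segment"] != "vip", "Here is 10% off your next order."),
]

print(render(blocks, {"first_name": "Ana", "segment": "vip"}))
# Dear Ana,
# As one of our best customers, enjoy free shipping.
```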
Apart from personalization, Neolane doesn’t seem to offer execution features for channels such as Web sites, call centers and sales automation. Nor, so far as I can tell, does it offer real-time interaction management—that is, gathering information about customer behavior during an interaction and determining an appropriate response. This is still a fairly specialized area and one where the major marketing automation vendors are just now delivering real products, after talking about it for years. This still puts them ahead of Neolane.
Execution also includes creation and management of the marketing database itself. Like most of its competitors, Neolane generally connects to a customer database built by an external system. (The exceptions would be enterprise suites like Oracle/Siebel and SAP, which manage the databases themselves. Yet even they tend to extract operational data into separate structures for marketing purposes.) Neolane does provide administrative tools for users to define database columns and tables, so it’s fairly easy to add new data if there’s no place else to store it. This would usually apply to administrative components such as budgets and planning data or to marketing-generated information such as campaign codes.
Analytics is the final function. Neolane does provide standard reporting. But it relies on connections to third-party software including SPSS and KXEN for more advanced analysis and predictive modeling. This is a fairly common approach and nothing to particularly complain about, although you do need to look closely to ensure that the integration is suitably seamless.
Overall, Neolane does provide a broad set of marketing functions, although it may not be quite as strong as its major competitors in some areas of execution and analytics. Still, it’s a viable new choice in a market that has offered few alternatives in recent years. So for companies considering a new system, it’s definitely worth a look.
Monday, October 08, 2007
Proprietary Databases Rise Again
I’ve been noticing for some time that “proprietary” databases are making a come-back in the world of marketing systems. “Proprietary” is a loaded term that generally refers to anything other than the major relational databases: Oracle, SQL Server and DB2, plus some of the open source products like MySQL. In the marketing database world, proprietary systems have a long history tracing back to the mid-1980’s MCIF products from Customer Insight, OKRA Marketing, Harte-Hanks and others. These originally used specialized structures to get adequate performance from the limited PC hardware available at the time. Their spiritual descendants today are Alterian and BlueVenn (formerly SmartFocus), both with roots in the mid-1990’s Brann Viper system and both having reinvented themselves in the past few years as low cost / high performance options for service bureaus to offer their clients.
Nearly all the proprietary marketing databases used some version of an inverted (now more commonly called “columnar”) database structure. In such a structure, data for each field (e.g., Customer Name) is physically stored in adjacent blocks on the hard drive, so it can be accessed with a single read. This makes sense for marketing systems, and analytical queries in general, which typically scan all contents of a few fields. By contrast, most transaction processes use a key to find a particular record (row) and read all its elements. Standard relational databases are optimized for such transaction processing and thus store entire rows together on the hard drive, making it easy to retrieve their contents.
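A toy example makes the difference in access patterns clearer. This is only an illustration of the principle; real products manage disk pages, compression and indexes, but the point is how much data each layout forces an analytical query to touch.

```python
# Row store vs. column store, in miniature. The analytical query
# "average balance in NY" needs only two fields.

rows = [
    {"id": 1, "name": "Smith", "state": "NY", "balance": 1200},
    {"id": 2, "name": "Jones", "state": "CA", "balance":  800},
    {"id": 3, "name": "Lee",   "state": "NY", "balance": 3100},
]

# Row layout: the query has to walk every record, reading all its fields,
# to get at the two it cares about.
ny_rows = [r for r in rows if r["state"] == "NY"]
avg_row_store = sum(r["balance"] for r in ny_rows) / len(ny_rows)

# Column layout: the same data pivoted so each field is stored contiguously;
# the query touches only the 'state' and 'balance' columns.
columns = {key: [r[key] for r in rows] for key in rows[0]}
ny_balances = [b for s, b in zip(columns["state"], columns["balance"]) if s == "NY"]
avg_column_store = sum(ny_balances) / len(ny_balances)

assert avg_row_store == avg_column_store == 2150.0
```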
Columnar databases themselves date back at least to mid-1970’s products including Computer Corporation of America Model 204, Software AG ADABAS, and Applied Data Research (now CA) Datacom/DB. All of these are still available, incidentally. In an era when hardware was vastly more expensive, the great efficiency of these systems at analytical queries made them highly attractive. But as hardware costs fell and relational databases became increasingly dominant, they fell by the wayside except for special situations. Their sweet spot of high-volume analytical applications was further invaded by massively parallel systems (Teradata and more recently Netezza) and multi-dimensional data cubes (Cognos Powerplay, Oracle/Hyperion EssBase, etc.). These had different strengths and weaknesses but still competed for some of the same business.
What’s interesting today is that a new generation of proprietary systems is appearing. Vertica has recently gained a great deal of attention due to the involvement of database pioneer Michael Stonebraker, architect of INGRES and POSTGRES. (Click here for an excellent technical analysis by the Winter Corporation; registration required.) QD Technology, launched last year (see my review), isn’t precisely a columnar structure, but uses indexes and compression in a similar fashion. I can’t prove it, but suspect the new interest in alternative approaches is because analytical databases are now getting so large—tens and hundreds of gigabytes—that the efficiency advantages of non-relational systems (which translate into cost savings) are now too great to ignore.
We’ll see where all this leads. One of the few columnar systems introduced in the 1990’s was Expressway (technically, a bit map index—not unlike Model 204), which was purchased by Sybase and is now moderately successful as Sybase IQ. I think Oracle also added some bit-map capabilities during this period, and suspect the other relational database vendors have their versions as well. If columnar approaches continue to gain strength, we can certainly expect the major database vendors to add them as options, even though they are literally orthogonal to standard relational database design. In the meantime, it’s fun to see some new options become available and to hope that costs will come down as new competitors enter the domain of very large analytical databases.
Friday, October 05, 2007
Analytica Provides Low-Cost, High-Quality Decision Models
My friends at DM News, which has published my Software Review column for the past fifteen years, unceremoniously informed me this week that they had decided to stop carrying all of their paid columnists, myself included. This caught me in the middle of preparing a review of Lumina Analytica, a pretty interesting piece of simulation modeling software. Lest my research go to waste, I’ll write about Analytica here.
Analytica falls into the general class of software used to build mathematical models of systems or processes, and then predict the results of a particular set of inputs. Business people typically use such software to understand the expected results of projects such as a new product launch or a marketing campaign, or to forecast the performance of their business as a whole. The software can also be used to model the lifecycle of a customer or to calculate the results of key performance indicators as linked on a strategy map, if the relationships among those indicators have been defined with sufficient rigor.
When the relationships between inputs and outputs are simple, such models can be built in a spreadsheet. But even moderately complex business problems are beyond what a spreadsheet can reasonably handle: they have too many inputs and outputs and the relationships among these are too complicated. Analytica makes it relatively easy to specify these relationships by drawing them on an “influence diagram” that looks like a typical flow chart. Objects within the chart, representing the different inputs and outputs, can then be opened up to specify the precise mathematical relationships among the elements.
Analytica can also build models that run over a number of time periods, using results from previous periods as inputs to later periods. You can do something like this in a spreadsheet, but it takes a great many hard-coded formulas which are easy to get wrong and hard to change. Analytica also offers a wealth of tools for dealing with uncertainties, such as many different types of probability distributions. These are virtually impossible to handle in a normal spreadsheet model.
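For those who haven’t used this kind of tool, here’s a rough Python analogue of a multi-period model with uncertain inputs. Analytica expresses this visually through its diagrams and built-in distributions; the structure and numbers below are invented purely to show the idea.

```python
# Rough analogue of a multi-period model with uncertainty: each period's
# customer count feeds the next, and churn and spend are drawn from
# probability distributions. All figures are invented for illustration.

import random

def simulate(periods=12, trials=5000):
    totals = []
    for _ in range(trials):
        customers = 10_000
        revenue = 0.0
        for _ in range(periods):
            churn = max(random.gauss(0.02, 0.005), 0.0)   # uncertain monthly churn rate
            spend = max(random.gauss(40.0, 8.0), 0.0)     # uncertain revenue per customer
            customers *= (1 - churn)
            revenue += customers * spend
        totals.append(revenue)
    totals.sort()
    return totals[len(totals) // 2], totals[len(totals) // 10]

median, tenth_percentile = simulate()
print(f"median 12-month revenue: {median:,.0f}; pessimistic (10th percentile): {tenth_percentile:,.0f}")
```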
Apart from spreadsheets, Analytica sits between several niches in the software world. Its influence diagrams resemble the pictures drawn by graphics software, but unlike simple drawing programs, Analytica has actual calculations beneath its flow charts. On the other hand, Analytica is less powerful than process modeling software used to simulate manufacturing systems, call center operations, or other business processes. That software has many very sophisticated features tailored to modeling such flows in detail: for example, ways to simulate random variations in the arrival rates of telephone calls or to incorporate the need for one process step to wait until several others have been completed. It may be possible to do much of this in Analytica, but it would probably be stretching the software beyond its natural limits.
What Analytica does well is model specific decisions or business results over time. The diagram-building approach to creating models is quite powerful and intuitive, particularly because users can build modules within their models, so a single object on a high-level diagram actually refers to a separate, detailed diagram of its own. Object attributes include inputs, outputs and formulas describing how the outputs are calculated. Objects can also contain arrays to handle different conditions: for example, a customer object might use arrays to define treatments for different customer segments. This is a very powerful feature, since it lets an apparently simple model capture a great deal of actual detail.
Setting up a model in Analytica isn’t exactly simple, although it may be about as easy as possible given the inherent complexity of the task. Basically, users place the objects on a palette, connect them with arrows, and then open them up to define the details. There are many options within these details, so it does take some effort to learn how to get what you want. The vendor provides a tutorial and detailed manual to help with the learning process, and offers a variety of training and consulting options. Although it is accessible to just about anyone, the system is definitely oriented towards sophisticated users, providing advanced statistical features and methods that no one else would understand.
The other intriguing feature of Analytica is its price. The basic product costs a delightfully reasonable $1,295. Other versions range up to $4,000 and add the ability to access ODBC data sources, handle very large arrays, and run automated optimization procedures. A server-based version costs $8,000, but only very large companies would need that one.
This pricing is quite impressive. Modeling systems can easily cost tens or hundreds of thousands of dollars, and it’s not clear they provide much more capability than Analytica. On the other hand, Analytica’s output presentation is rather limited—some basic tables and graphs, plus several statistical measures of uncertainty. There’s that statistical orientation again: as a non-statistician, I would have preferred better visualization of results.
In my own work, Analytica could definitely provide a tool for building models to simulate customers’ behaviors as they flow through an Experience Matrix. This is already more than a spreadsheet can handle, and although it could be done in QlikTech it would be a challenge. Similarly, Analytica could be used in business planning and simulation. It wouldn’t be as powerful as a true agent-based model, but could provide an alternative that costs less and is much easier to learn how to build. If you’re in the market for this sort of modeling—particularly if you want to model uncertainties and not just fixed inputs—Analytica is definitely worth a look.
Tuesday, October 02, 2007
Marketing Performance Measurement: No Answers to the Really Tough Questions
I recently ran a pair of two-day workshops on marketing performance measurement. My students had a variety of goals, but the two major ones they mentioned were the toughest issues in marketing: how to allocate resources across different channels and how to measure the impact of marketing on brand value.
Both questions have standard answers. Channel allocation is handled by marketing mix models, which analyze historical data to determine the relative impact of different types of spending. Brand value is measured by assessing the important customer attitudes in a given market and how a particular brand matches those attitudes.
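For anyone unfamiliar with the mechanics of the first answer, here is a bare-bones sketch of the regression idea behind a marketing mix model. Real models add adstock, saturation and seasonality effects; the data below is invented purely for illustration.

```python
# Bare-bones marketing mix idea: regress sales on historical spend by channel
# to estimate each channel's relative impact. Data is invented.

import numpy as np

# weekly spend (TV, email) and sales, in arbitrary units
spend = np.array([[10, 2], [12, 3], [8, 5], [15, 4], [9, 6], [14, 2]], dtype=float)
sales = np.array([120, 140, 115, 165, 125, 150], dtype=float)

X = np.column_stack([np.ones(len(spend)), spend])   # intercept plus one column per channel
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, tv_effect, email_effect = coef
print(f"estimated lift per unit of TV spend: {tv_effect:.1f}; per unit of email spend: {email_effect:.1f}")
```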
Yet, despite my typically eloquent and detailed explanations, my students found these answers unsatisfactory. Cost was one obstacle for most of them; lack of data was another. They really wanted something simpler.
I’d love to report I gave it to them, but I couldn't. I had researched these topics thoroughly as preparation for the workshops and hadn’t found any alternatives to the standard approaches; further research since then still hasn’t turned up anything else of substance. Channel allocation and brand value are inherently complex and there just are no simple ways to measure them.
The best I could suggest was to use proxy data when a thorough analysis is not possible due to cost or data constraints. For channel allocation, the proxy might be incremental return on investment by channel: switching funds from low ROI to high ROI channels doesn’t really measure the impact of the change in marketing mix, but it should lead to an improvement in the average level of performance. Similarly, surveys to measure changes in customer attitudes toward a brand don’t yield a financial measure of brand value, but do show whether it is improving or getting worse. Some compromise is unavoidable here: companies not willing or able to invest in a rigorous solution must accept that their answers will be imprecise.
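A tiny worked example shows what I mean by the incremental ROI proxy. The numbers are invented, and the calculation assumes each channel’s ROI holds steady after the shift, which is exactly the simplification the proxy accepts.

```python
# Reallocating budget from the lower-ROI channel to the higher-ROI one raises
# blended return, even though the true mix effect is never measured. Invented numbers.

budget = {"print": 100_000, "email": 100_000}
roi = {"print": 1.2, "email": 3.5}   # return per dollar, taken from past campaigns

def blended_return(budget, roi):
    return sum(budget[ch] * roi[ch] for ch in budget)

before = blended_return(budget, roi)
budget["print"] -= 25_000            # shift a quarter of print spend to email
budget["email"] += 25_000
after = blended_return(budget, roi)
print(before, after)                 # 470000.0 527500.0
```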
This round of answers was little better received than the first. Even ROI and customer attitudes are not always available, and they are particularly hard to measure in multi-channel environments where the result of a particular marketing effort cannot easily be isolated. You can try still simpler measures, such as spending or responses for channel performance or market share for brand value. But these are so far removed from the original question that it’s difficult to present them as meaningful answers.
The other approach I suggested was testing. The goal here is to manufacture data where none exists, thereby creating something to measure. This turned out to be a key concept throughout the performance measurement discussions. Testing also shows that marketers are at least doing something rigorous, thereby helping satisfy critics who feel marketing investments are totally arbitrary. Of course, this is a political rather than analytical approach, but politics are important. The final benefit of testing is that it gives a platform for continuous improvement: even though you may not know the absolute value of any particular marketing effort, a test tells whether one option or another is relatively superior. Over time, this allows a measurable gain in results compared with the original levels. Eventually it may provide benchmarks to compare different marketing efforts against each other, helping with both channel allocation and brand value as well.
Even testing isn’t always possible, as my students were quick to point out. My answer at that point was simply that you have to seek situations where you can test: for example, Web efforts are often more measurable than conventional channels. Web results may not mirror results in other channels, because Web customers may themselves be very different from the rest of the world. But this again gets back to the issue of doing the best with the resources at hand: some information is better than none, so long as you keep in mind the limits of what you’re working with.
I also suggested that testing is more possible than marketers sometimes think, if they really make testing a priority. This means selecting channels in part on the basis of whether testing is possible; designing programs so testing is built in; and investing more heavily in test activities themselves (such as incentives for survey participants). This approach may ultimately lead to a bias in favor of testable channels—something that seems excessive at first: you wouldn’t want to discard an effective channel simply because you couldn’t test it. But it makes some sense if you realize that testable channels can be improved continuously, while results in untestable channels are likely to stagnate. Given this dynamic, testable channels will sooner or later become more productive than untestable channels. This holds even if the testable channels are less efficient at the start.
I offered all these considerations to my students, and may have seen a few lightbulbs switch on. It was hard to tell: by the time we had gotten this far into the discussion, everyone was fairly tired. But I think it’s ultimately the best advice I could have given them: focus on testing and measuring what you can, and make the best use possible of the resulting knowledge. It may not directly answer your immediate questions, but you will learn how to make the most effective use of your marketing resources, and that’s the goal you are ultimately pursuing.