Everybody wants to get the best results from their Web site, and plenty of vendors are willing to help. I’ve been trying to make sense of the different Web site optimization vendors, and have tentatively decided they fall into four groups:
- Web analytics. These do log file or page beacon analysis to track page views by visitors. Examples are Coremetrics, Omniture, WebSideStory, and WebTrends. They basically can tell you how visitors are moving through your site, but then it’s up to you to figure out what to do about it. So far as I know, they lack formal testing capabilities other than reporting on tests you might set up separately.
- Multi-variate testing. These systems let users define a set of elements to test, build an efficient test matrix that tries them in different combinations, execute the tests, and report on the results. Examples are Google Website Optimizer, Offermatica, Optimost, SiteSpect and Vertster. These systems serve the test content into user-designated page slots, which lets them control what each visitor sees. Their reports estimate the independent and combined impact of different test elements, and may go so far as to recommend an optimal combination of components (there’s a rough sketch of the mechanics after this list). But it’s up to the user to apply the test results to production systems. [Since writing this I've learned that at least some vendors can automatically deploy the winning combination. You'll need to check with the individual vendors for details.]
- Discrete choice models. These resemble multi-variate testing but use a different mathematical approach. They present different combinations of test elements to users, observe their behavior, and create predictive models with weights for the different categories of variables (the second sketch after this list shows the idea). This provides a level of abstraction that is unavailable in the multi-variate testing results, although I haven’t quite decided whether this really matters. So far as I can tell, only one vendor, Memetrics, has built choice models into a Web site testing system. (Others, including Fair Isaac and MarketingNPV, offer discrete choice testing via Web surveys.) Like the multi-variate systems, Memetrics controls the actual Web site content served in the tests. It apparently does have the capability to move winning rules into production.
- Behavioral targeting. These systems monitor visitor behavior and serve each person the content most likely to meet business objectives, such as sales or conversions. Vendors include Certona, Kefta, Touch Clarity, and [x+1]; vendors with similar technology for ad serving include Accipiter, MediaPlex, and RevenueScience. These systems automatically build predictive models that select the most productive content for each visitor, refine the models as new results accumulate, and serve the recommended content (the last sketch after this list shows the basic loop). However, they test each component independently and can only test offers. This means they cannot answer questions about combinations of, say, headline and pricing, or which color or page layout is best.
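To make the multi-variate testing mechanics a little more concrete, here is a rough sketch in Python of the core idea: enumerate a matrix of element combinations and consistently assign each visitor to one cell. The element names and values are purely hypothetical, and real products use more efficient fractional designs and proper statistics rather than this toy version.

```python
import hashlib
import itertools

# Hypothetical page elements to test; real tools let you define these in a UI.
elements = {
    "headline": ["Save 20% Today", "Free Shipping Over $50"],
    "button_color": ["green", "orange"],
    "price_display": ["$49.99", "$50 with free returns"],
}

# Full-factorial test matrix: every combination of every element.
# (Commercial tools typically use fractional designs to cut the cell count.)
combinations = [dict(zip(elements, values))
                for values in itertools.product(*elements.values())]

def assign_visitor(visitor_id: str) -> dict:
    """Deterministically map a visitor to one test cell so repeat visits
    see the same combination."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return combinations[int(digest, 16) % len(combinations)]

print(len(combinations), "cells")        # 2 x 2 x 2 = 8 combinations
print(assign_visitor("visitor-abc123"))  # the cell this visitor would see
```

The reporting side then compares conversion rates by cell, and by element level, to estimate which pieces actually matter.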
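The discrete choice approach can be sketched in a similar spirit, except the output is a model whose coefficients act as weights on each element level rather than a winner per cell, which is the extra layer of abstraction I mentioned. In the sketch below a plain logistic regression stands in for a true choice model, and the handful of observations is invented.

```python
# A rough stand-in for a discrete choice model: logistic regression on
# one-hot-encoded element levels, predicting conversion. The data is made up.
import pandas as pd
from sklearn.linear_model import LogisticRegression

observations = pd.DataFrame({
    "headline":     ["A", "A", "B", "B", "A", "B"],
    "button_color": ["green", "orange", "green", "orange", "green", "orange"],
    "converted":    [1, 0, 1, 1, 0, 1],
})

X = pd.get_dummies(observations[["headline", "button_color"]])
y = observations["converted"]

model = LogisticRegression().fit(X, y)

# The fitted coefficients act as weights for each element level, so
# combinations that were never actually shown together can still be scored.
for name, weight in zip(X.columns, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```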
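Behavioral targeting is less a one-time test than an always-on feedback loop. The sketch below uses a simple epsilon-greedy rule as a stand-in: serve whichever offer has converted best so far while reserving some traffic for exploration. The offers and conversion rates are made up, and real systems build per-visitor predictive models rather than one global average.

```python
import random

# Hypothetical offers; counts accumulate as visitors respond.
offers = {"free_shipping":  {"shown": 0, "converted": 0},
          "10_percent_off": {"shown": 0, "converted": 0},
          "loyalty_points": {"shown": 0, "converted": 0}}

EPSILON = 0.1  # share of traffic reserved for exploration

def choose_offer() -> str:
    """Serve the best-converting offer so far, or explore at random."""
    if random.random() < EPSILON or all(o["shown"] == 0 for o in offers.values()):
        return random.choice(list(offers))
    return max(offers, key=lambda n: offers[n]["converted"] / max(offers[n]["shown"], 1))

def record_result(offer: str, converted: bool) -> None:
    """Refine the running estimates as new results accumulate."""
    offers[offer]["shown"] += 1
    offers[offer]["converted"] += int(converted)

# Tiny simulation with made-up true conversion rates for each offer.
true_rates = {"free_shipping": 0.06, "10_percent_off": 0.04, "loyalty_points": 0.02}
for _ in range(5000):
    offer = choose_offer()
    record_result(offer, random.random() < true_rates[offer])

for name, stats in offers.items():
    print(name, stats)
```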
Clearly these are very disparate tools. I’m listing them together because they all aim to help companies improve results from their Web sites, and all thus compete for the attention and budgets of marketers who must decide which projects to tackle first. I don’t know whether there’s a logical sequence in which they should be employed or some way to make them all work together. But clarifying the differences among them is a first step to making those judgments.
Great start at sorting out the differences between optimization vendors.
A couple of points of clarification:
- Memetrics' xOs platform does productionize, or publish, the optimal rules it uncovers.
- A discrete choice framework helps drive both the inputs and outputs of optimization activities. In terms of inputs, we focus on optimizing the decision-making process across key customer choices (regardless of channel). In terms of outputs, choice models power optimization for different types of customer segments, multiple objectives, and financial considerations related to campaign costs and expected customer values.
We are committed to helping demystify the optimization space and support informed decision making in the market. Please let us know if we can help you further in your research by giving you a demonstration of the platform.
Best,
Hikaru Phillips
CEO
Memetrics Inc.
Thanks, Hikaru. Sorry about the errors. I've corrected the original post and will write a bit more about this tomorrow too.
- David
Excellent post, David!
First, the disclosure: I'm the VP of Marketing Services for Kefta... my team manages client interactions and designs our Dynamic Targeting scenarios. Also, thank you for placing us in the "behavioral targeting" section... this aptly describes our positioning.
I would like to add a couple of points regarding your statement that behavioral targeting vendors can only test elements separately and can only test offers. I believe you are identifying two key dimensions: 1) the areas of the visitor experience that a vendor can affect, and 2) the availability of testing technology.
To the first point – For several years, we’ve deployed campaigns that simultaneously personalize and test numerous page elements (headlines, layout, CTA, etc.), as well as other dimensions: off-site elements like banners and emails, extra-site elements like layers and pop-ups, and offline elements like sales calls. Yes, this is more complex to create, and the technology required is more sophisticated.
To the second point – We have conducted MVT (multivariate testing) for several years, but early on we found that segmentation of visitors is a primary concern and that testing can only provide long-term, fruitful answers within a relatively homogeneous group of people. Without this targeting effort, testing only identifies a better set of page elements for a specific period in time. Once the average profile of visitor types changes, any gains are lost.
That statement points to two problems with conducting testing without targeting: 1) since the needs and expectations of site visitors are very different, this approach WILL sub-optimize the experience, and your results, for some groups of visitors; that is, you are only creating improvements for the larger group at the expense of smaller groups. 2) Results from this approach will only last until the makeup of your visitor groups changes. For example, if you have twice as many quality shoppers as price-sensitive shoppers, MVT would point to a site designed around the needs of the quality shopper; but if you later acquire more price-sensitive shoppers, say from comparison shopping sites, and the balance of traffic shifts toward the price-sensitive group, the page your test said was best is no longer appropriate (a quick illustration with made-up numbers follows).
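To put rough, invented numbers on that second point:

```python
# Invented rates: page -> (conversion for quality shoppers, for price-sensitive shoppers)
pages = {"quality_page": (0.06, 0.02), "value_page": (0.03, 0.05)}

def blended_rate(page: str, share_quality: float) -> float:
    """Overall conversion rate at a given share of quality-shopper traffic."""
    q_rate, p_rate = pages[page]
    return share_quality * q_rate + (1 - share_quality) * p_rate

for share in (0.67, 0.33):  # 2:1 quality shoppers, then the mix reversed
    winner = max(pages, key=lambda p: blended_rate(p, share))
    print(f"{share:.0%} quality traffic -> {winner} ({blended_rate(winner, share):.1%})")
```

The aggregate winner flips as soon as the mix shifts, even though neither page got better or worse for either group, which is why segmentation has to come before, or at least alongside, the testing.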
Happy Holidays!
Mark
VP Marketing Services
Kefta
Hi Mark,
Thanks for disclosing your connection with Kefta. Much appreciated.
Thanks also for the clarification regarding Kefta's capabilities. It seems that you straddle the behavioral and MVT groups. I'll be interested to take a closer look at your details. Perhaps I'll also try to come up with stricter definitions of the different categories--as I wrote yesterday, the classifications were tentative.
You make an excellent point regarding segmentation. The MVT vendors I've looked at do support segmentation within tests, no doubt to greater or lesser degrees depending on the product.