Tuesday’s post and subsequent discussion of whether SiteSpect’s no-tag approach to Web site testing is significantly easier than inserting JavaScript tags has been interesting but, for me at least, inconclusive. I understand that inserting tags into a production page requires the same testing as any other change, and that SiteSpect avoids this. But the tags are only inserted once, either per slot on a given page or for the page as a whole. After this, any number of tests can be set up and run on that page without additional changes. And given the simplicity of the tags themselves, they are unlikely to cause problems that take a lot of work to fix.
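For readers who haven't seen one, the work a tag does once it's on the page is roughly this: deterministically assign each visitor to a test cell so the same person always sees the same variation. The sketch below is purely illustrative; the function names and the hashing scheme are invented, not any vendor's actual tag.

```javascript
// Hypothetical sketch of the core job of a testing "tag": bucket a visitor
// into one of the variants defined for a page slot. Real vendor tags differ;
// all names and the hash here are invented for illustration.

function hashVisitorId(visitorId) {
  // Simple 32-bit string hash (djb2-style); real systems use sturdier schemes.
  let h = 5381;
  for (let i = 0; i < visitorId.length; i++) {
    h = (h * 33 + visitorId.charCodeAt(i)) >>> 0;
  }
  return h;
}

function pickVariant(visitorId, variants) {
  // The same visitor always lands in the same cell, so the experience is
  // consistent across page views without any server-side state.
  return variants[hashVisitorId(visitorId) % variants.length];
}

// Once the tag is in place, new tests are just new variant lists --
// no further page changes needed:
const headlineTest = ['control', 'bigger-button', 'new-copy'];
const variant = pickVariant('visitor-123', headlineTest);
```

The point of the sketch is the one made above: the page-side change happens once, and subsequent tests are configured entirely in the testing system.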
Of course, no work is easier than a little work, so avoiding tags does have some benefit. But most of the labor will still be in setting up the tests themselves. So the efficiency of the setup procedure will have much more impact on the total effort required to run a Web testing system than whether or not it uses tags. I’ve now seen demonstrations of all the major systems—Offermatica, Memetrics, Kefta, Optimost and SiteSpect—and written reviews of the first three (posted in my article archive). But even that doesn’t give me enough information to say one is easier to work with than another.
This is a fundamental issue with any kind of software assessment. You can talk to vendors, look at demonstrations, compare function lists, and read as many reviews as you like, but none of that shows what it’s like to use a product for your particular projects. Certainly with the Web testing systems, the different ways that clients configure their Web sites will have a major impact on whether a particular product is hard or easy to use. Deployment effort will also depend on what other systems are part of the site, as well as the nature of the desired tests themselves.
This line of reasoning leads mostly towards insisting that users should run their own tests before buying anything. That’s certainly sound advice: nobody ever regretted testing a product too thoroughly. But testing only works if you understand what you’re doing. Buyers who have never worked with a particular type of system often won’t know enough to run a meaningful test. So simply proclaiming that testing is always the solution isn’t correct.
This is where vendors can help. The more realistic a simulation they can provide of using their product, the more intelligently customers can judge whether the product will work for them. The reality is that most customers’ needs can be met by more than one product. Even though customers rightly want to find the best solution, all they really need is to find one that’s adequate and get on with their business. The first vendor to prove they can do the job wins.
Products that claim a unique and substantial advantage over competitors, like SiteSpect, face a tougher challenge. Basically, no one believes it when vendors say their product is better, simply because all vendors say that. So vendors making radical claims must work hard to prove their case through explanations, benchmarks, case studies, worksheets, and whatever else it might take to show that the differences (a) really exist and (b) really matter. In theory, head-to-head comparisons against other vendors are the best way to do this, but the obvious bias of vendor-sponsored comparisons (not to mention potential for lawsuits) makes this extremely difficult. The best such vendors can do is to state their claims clearly and with as much justification as possible, and hope they can convince potential buyers to take a closer look.
Thursday, March 15, 2007
1 comment:
Having used multiple A/B and multivariate testing solutions, I can confidently say SiteSpect is better for a few reasons:
The top tag-based solutions charge by the number of tag calls you make even if you do not use those tags. Most larger sites will require a release to load the tags onto the area of the site you want to test. Inevitably, you forget to do the release to remove the tags and end up paying for them.
Tag-based solutions often require third-party cookies, which Safari does not allow by default. You lose the ability to test a large percentage of traffic.
Tag-based solutions have imprecise reporting because the JavaScript doesn't always fire. SiteSpect is more accurate for the same reason that log-based analytics are.
SiteSpect allows you to test a variation across all pages regardless of where it appears on the site, for example changing the word "black" to "white" everywhere. Tag-based solutions could never do this.
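The commenter's last point follows from where the tool sits: a proxy-style system sees the full HTML of every response, so one rewrite rule can apply to the whole site, while a per-page tag only touches pages that carry it. The snippet below is a hypothetical illustration of that idea, not SiteSpect's actual implementation; the names are invented.

```javascript
// Hypothetical sketch of a proxy-style rewrite: because the tool processes
// the markup of every response, a single rule changes content site-wide,
// with no per-page tags. Not SiteSpect's actual code; names are invented.

function applyRewriteRule(html, rule) {
  // A rule is a simple search/replace pair applied to any page's markup.
  return html.split(rule.search).join(rule.replace);
}

const rule = { search: 'black', replace: 'white' };
const page = '<p>Our black widgets ship in a black box.</p>';
const rewritten = applyRewriteRule(page, rule);
// rewritten: '<p>Our white widgets ship in a white box.</p>'
```

Under this model the rule applies wherever the text appears, which is exactly the "change 'black' to 'white' everywhere" scenario the comment describes.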