I've continued to refine my checklist of items for scoring demand generation vendors on "ease of use for basic functions". Results are promising, in that my draft rankings agree with my intuitive sense of where different vendors fall on the continuum. Of course, actually publishing the rankings will put some vendor noses out of joint, so I need to think a bit more about how to do it so that everything is as transparent and reasonable as possible. I'm hoping to release something toward the end of this week, but won't make any promises.
In preparation, I wanted to share a general look at the options I have available for building usability rankings. This should help clarify why I’ve chosen the path I’m following.
First, let's set some criteria. A suitable scoring method has to be economically feasible, reasonably objective, easily explained, and not subject to vendor manipulation. Economics is probably the most critical stumbling block: although I'd be delighted to run each product through formal testing in a usability lab or to conduct a massive industry-wide survey, these would be impossibly expensive. Objectivity and explainability are less restrictive goals; mostly, they rule out my simply assigning scores arbitrarily and without explanation. But I wouldn't want to do that anyway. Avoiding vendor manipulation mostly applies to surveys: if I just took an open poll on a Web site, there would be a danger of vendors trying to "stuff the ballot box" in various ways. So I won't go there.
As best I can figure, those constraints leave me with three primary options:
1. Controlled user surveys. I could ask the vendors to let me survey their client base or selected segments within that base. But it would be really hard to ensure that the vendors were not somehow influencing the results. Even if that weren't a concern, it would still be difficult to design a reliable survey that puts the different vendors on the same footing. Just asking users to rate the systems for "ease of use" surely would not work, because people with different skill levels would give inconsistent answers. Asking how long it took to learn the system or build their first campaigns, or how long it takes to build an average campaign or perform a specific task, would face similar problems, plus the additional unreliability of informal time estimates. In short, I just can't see a way to build and execute a reliable survey that addresses the issue.
2. Time all vendors against a standard scenario. This would require defining all the components that go into a simple campaign, and then asking each vendor to build the campaign while I watch. Mostly I’d be timing how long it took to complete the process, although I suppose I’d also be taking notes about what looks hard or easy. You might object that the vendors would provide expert users for their systems, but that’s okay because clients also become expert users over time. There are some other issues having to do with set-up time vs. completion time—that is, how much should vendors be allowed to set up in advance? But I think those issues could be addressed. My main concern is whether the vendors would be willing to invest the hour or two it would take to complete a test like this (not to mention whether I can find the dozen or two hours needed to watch them all and prepare the results). I actually do like this approach, so if the vendors reading this tell me that they’re willing, I’ll probably give it a go.
3. Build a checklist of ease-of-use functions. This involves defining specific features that make a system easy to use for simple programs, and then determining which vendors provide those features. The challenge here is selecting the features, since people will disagree about what is hard or easy. But I'm actually pretty comfortable with the list I've developed, because there do seem to be some pretty clear trade-offs between making it easy to do simple things and making it easy to do complicated things. The advantage of this method is that once you've settled on the checklist, the actual vendor ratings are quite objective and easily explained. It's no small bonus that I've already gathered most of the information as part of my other vendor research, which means I can deliver the rankings fairly quickly and with minimal additional effort from the vendors or myself.
So those are my options. I'm not trying to convince you that approach number 3 is the "right" choice, only to show that I've considered several possibilities and that number 3 seems to be the most practical solution available. Let me stress right here that I intend to produce two ease-of-use measures, one for simple programs and another for complex programs. This is very important because of the trade-offs I just mentioned. Having two measures will force marketers to ask themselves which one applies to them, and therefore to recognize that there is no single right answer for everyone. I wish I could say this point is so obvious that I needn't make it, but it's ignored more often than anyone would care to admit.
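To make the checklist mechanics concrete, here is a minimal sketch in Python of how a feature checklist could be turned into separate "simple program" and "complex program" scores. Everything in it (the feature names, the vendors, and which features they offer) is a hypothetical placeholder for illustration, not my actual checklist or any vendor's real rating.

```python
# Illustrative sketch only: the checklist items, vendors, and coverage below
# are hypothetical placeholders, not the real checklist or real vendor data.

CHECKLIST = {
    # checklist item -> which ease-of-use measure it counts toward
    "drag_and_drop_campaign_builder": "simple",
    "prebuilt_email_templates": "simple",
    "single_screen_campaign_setup": "simple",
    "multi_step_branching_flows": "complex",
    "custom_data_model": "complex",
    "api_access": "complex",
}

# Which checklist items each (made-up) vendor provides.
VENDORS = {
    "Vendor A": {"drag_and_drop_campaign_builder", "prebuilt_email_templates",
                 "single_screen_campaign_setup"},
    "Vendor B": {"multi_step_branching_flows", "custom_data_model", "api_access"},
}

def score(provided):
    """Count how many 'simple' and 'complex' checklist items a vendor covers."""
    totals = {"simple": 0, "complex": 0}
    for item, measure in CHECKLIST.items():
        if item in provided:
            totals[measure] += 1
    return totals

for vendor, provided in VENDORS.items():
    totals = score(provided)
    print(f"{vendor}: simple={totals['simple']}, complex={totals['complex']}")
# Vendor A: simple=3, complex=0
# Vendor B: simple=0, complex=3
```

The two totals would stay separate rather than being rolled into a single grade, which is exactly why a vendor could rank high on one measure and low on the other.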
Of course, even two measures can’t capture the actual match between different vendors’ capabilities and each company’s particular requirements. There is truly no substitute for identifying your own needs and assessing the vendors directly against them. All I can hope to do with the generic ratings is to help buyers select the few products that are most likely to fit their needs. Narrowing the field early in the process will give marketers more time to look at the remaining contenders in more depth.
Monday, February 23, 2009
4 comments:
Great topic, near and dear to my heart.
Option 2 sounds cool. It allows each vendor to present their best-practice methods. The difference in methodologies will help illustrate who their target audience is (e.g., Enterprise, SMB, etc.). You also might consider having different types of campaigns, like Scoring, Nurturing, and Data Processing. I imagine different vendors will vary substantially in how they achieve each of these.
In terms of usability, one thing pops to mind. Task completion rates don't tell the whole story. I was at Intuit a few years ago and we noticed a real difference between satisfaction scores and task completion. Someone might love the application and still fail to complete their tasks, or vice versa. Obviously, the two affect each other. In the end, an emotional connection AND high task completion together are the recipe for success. Monitoring the cross-section of satisfaction scores ("Would you recommend this to a friend?") and task completion rates/times might give a fuller picture.
Lastly, I thought it might be interesting to also measure different cross-sections like:
1. How long does it take the first time you do X?
2. Do you need to repeat X many times? How difficult is that?
3. How easy is it to make mistakes? How easy is it to recover from mistakes?
There are so many questions and so many angles. It's great that you are doing the hard work of boiling it all down. Keep it up!
Thanks Glen. You've hit the nail on the head: usability is a very complicated, multi-dimensional problem that I'm trying to simplify. The risk is I'll over-simplify it and do more harm than good. But buyers will seek out simple answers whether they come from me or someone else, so I'm hoping to at least address the issue as responsibly as possible.
Incidentally, I'm looking forward to seeing the new Marketo interface, which I'm sure you had much to do with. Talking to Phil later today.
Mr. Raab,
Thanks for a great article! I am wondering whether there are any companies or organizations out there that can do an independent assessment of the user-friendliness of a specific software package.
Thanks
Terry Morris
Hi Terry,
I'm not aware of anyone who specializes in that, but it would be part of any assessment done by a consultant who helps to select systems...such as Raab Associates.