After a few bon mots that probably no one else will find clever ("Usability is hard to measure; features are easy to count"; "Small hard facts beat big blurry realities"), I got to describing the steps in a usability-aware selection process:
- define business needs
- define processes to meet those needs
- define tasks within each process
- identify systems to consider, then, for each system:
- determine which users will do each task
- determine how much work each task will be
- compare, rank and summarize the results
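The steps above can be sketched in code. Everything here is hypothetical — the processes, tasks, systems, and per-task work estimates are illustrative placeholders, and for brevity the sketch skips the per-task user assignments:

```python
# Steps 2-3 (sketch): business processes and the tasks within each.
processes = {
    "lead capture": ["import list", "dedupe records"],
    "campaign execution": ["build email", "schedule send"],
}

# Steps 4-6 (sketch): for each candidate system, estimated minutes of
# work per task occurrence. All numbers are made up for illustration.
work_estimates = {
    "System A": {"import list": 10, "dedupe records": 30,
                 "build email": 45, "schedule send": 5},
    "System B": {"import list": 25, "dedupe records": 15,
                 "build email": 20, "schedule send": 10},
}

# Step 7: compare, rank and summarize -- total estimated work per system.
totals = {
    system: sum(minutes[task]
                for tasks in processes.values()
                for task in tasks)
    for system, minutes in work_estimates.items()
}
ranking = sorted(totals, key=totals.get)  # least total work first
print(ranking)
```

A real evaluation would weight each task by how often it occurs and by which users perform it, but the shape of the comparison is the same: per-task effort rolled up into a per-system summary.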
As a point of comparison, it's steps 3, 5 and 6 that differ from the conventional selection process. Step 3 in a conventional process would identify features needed rather than tasks, while steps 5 and 6 would be replaced with research into system features.
What I realized as I was writing this was that the real focus is not on usability, but on defining processes and tasks. Usability measures are something of a by-product. In fact, the most natural way to implement this approach would be to score each system for each task, with a single score that incorporates both functionality and ease of use. Indeed, as I wrote not long ago, standard definitions of usability include both these elements, so this is not exactly an original thought.
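One hypothetical way to build such a single per-task score — the ratings and the multiplicative combination are my illustrative assumptions, not a prescribed formula — is to rate functionality and ease of use separately and multiply them, so that a missing feature (functionality of zero) makes the system score zero for that task:

```python
# Hypothetical ratings for one system, on a 0-5 scale per task:
# (functionality, ease_of_use). A functionality of 0 means the
# feature needed for the task is absent entirely.
ratings = {
    "build email": (4, 3),
    "schedule send": (5, 4),
    "dedupe records": (0, 5),  # feature missing: unusable for this task
}

# Multiplying (rather than averaging) makes a missing feature fatal
# for that task, no matter how easy the rest of the system is to use.
scores = {task: f * e for task, (f, e) in ratings.items()}
print(scores)  # {'build email': 12, 'schedule send': 20, 'dedupe records': 0}
```

The design choice worth noting is the combination rule: averaging would let strong ease of use partially offset an absent feature, while multiplying treats the absence as disqualifying for that task.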
Still, it does mean I have to restructure the terms of the debate (at least, the one inside my head). It's not usability vs. features, but process vs. features. That is, I'm essentially arguing that selection processes should invest their effort in understanding the company business processes that the new system must support, and in particular the tasks that different users will perform.
The good news here is that you'll eventually need to define, or maybe redefine, those processes, tasks and user roles for a successful implementation. So you're not doing more work, but simply doing the implementation work sooner. Because that effort is reused during implementation, a process-focused evaluation approach ultimately reduces the total work involved, as well as reducing implementation time and improving the prospects for success. By contrast, time spent researching system features is pretty much a waste once the selection process is complete.
Of course, this does raise the question of whether the feature information assembled in the Raab Guide to Demand Generation Systems is really helpful. You won't be surprised to find I think it is. This is not so much because of the feature checklist (truly my least favorite section) but because the Guide tries to show how the features are organized, which directly impacts system usability. Plus, of course, the absence of a necessary feature makes a system unusable for that particular purpose, and that is the biggest usability hit of all. What the Guide really does is save readers the work of assembling all the feature information for themselves, thereby freeing them to focus on defining their own business processes, tasks and users.
In conclusion, you should all go and buy the Guide immediately.