I’ve continued to refine my checklist of items for scoring demand generation vendors on "ease of use for basic functions". The results are promising: my draft rankings agree with my intuitive sense of where the different vendors fall on the continuum. Of course, actually publishing the rankings will put some vendors' noses out of joint, so I need to think a bit more about how to do it so that everything is as transparent and reasonable as possible. I'm hoping to release something toward the end of this week, but I won't make any promises.
In preparation, I wanted to share a general look at the options I have available for building usability rankings. This should help clarify why I’ve chosen the path I’m following.
First, let's set some criteria. A suitable scoring method has to be economically feasible, reasonably objective, easily explained, and not subject to vendor manipulation. Economics is probably the most critical stumbling block: although I'd be delighted to run each product through formal testing in a usability lab or to field a massive industry-wide survey, both would be impossibly expensive. Objectivity and explainability are less restrictive goals; mostly, they rule out my assigning scores arbitrarily and without explanation, which I wouldn't want to do anyway. Avoiding vendor manipulation mostly applies to surveys: if I simply ran an open poll on a website, vendors could try to “stuff the ballot box” in various ways. So I won't go there.
As best I can figure, those constraints leave me with three primary options:
1. Controlled user surveys. I could ask the vendors to let me survey their client bases or selected segments within them. But it would be very hard to ensure that the vendors were not somehow influencing the results. Even if that weren't a concern, it would still be difficult to design a reliable survey that puts the different vendors on the same footing. Simply asking users to rate the systems for “ease of use” surely would not work, because people with different skill levels would give inconsistent answers. Asking how long it took to learn the system or build a first campaign, or how long it takes to build an average campaign or perform a specific task, would face the same problems plus the added unreliability of informal time estimates. In short, I just can't see a way to build and execute a reliable survey that addresses the issue.
2. Time all vendors against a standard scenario. This would require defining all the components of a simple campaign and then asking each vendor to build that campaign while I watch. Mostly I'd be timing how long it takes to complete the process, although I suppose I'd also take notes on what looks hard or easy. You might object that the vendors would provide expert users of their systems, but that's fine because clients also become expert users over time. There are some other issues around set-up time versus completion time (that is, how much should vendors be allowed to prepare in advance?), but I think those could be addressed. My main concern is whether the vendors would be willing to invest the hour or two it would take to complete a test like this (not to mention whether I can find the dozen or two dozen hours needed to watch them all and prepare the results). I actually do like this approach, so if the vendors reading this tell me they're willing, I'll probably give it a go.
3. Build a checklist of ease-of-use features. This involves defining the specific features that make a system easy to use for simple programs, and then determining which vendors provide them. The challenge is selecting the features, since people will disagree about what is hard or easy. But I'm actually pretty comfortable with the list I've developed, because there do seem to be some fairly clear trade-offs between making it easy to do simple things and making it easy to do complicated things. The advantage of this method is that once the checklist is settled, the actual vendor ratings are quite objective and easily explained. It's no small bonus that I've already gathered most of the information as part of my other vendor research, which means I can deliver the rankings fairly quickly and with minimal additional effort from the vendors or myself.
So those are my options. I'm not trying to convince you that approach number 3 is the “right” choice, but simply to show that I've considered several possibilities and that number 3 seems to be the most practical one available. Let me stress that I intend to produce two ease-of-use measures, one for simple programs and another for complex programs. This matters because of the trade-offs I just mentioned. Having two measures will force marketers to ask themselves which one applies to them, and therefore to recognize that there is no single right answer for everyone. I wish I could say this point is too obvious to need making, but it's ignored more often than anyone would care to admit.
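To make the mechanics concrete, here is a minimal sketch of how checklist scoring with two separate measures could work. The feature names, vendor data, and the simple fraction-of-items formula are hypothetical placeholders for illustration only, not my actual checklist or ratings.

```python
# Minimal sketch of checklist-based ease-of-use scoring with two measures.
# All feature names and vendor inventories below are hypothetical examples.

SIMPLE_CHECKLIST = [
    "drag_and_drop_campaign_builder",
    "prebuilt_email_templates",
    "single_screen_campaign_setup",
]

COMPLEX_CHECKLIST = [
    "multi_step_branching_flows",
    "custom_data_fields",
    "external_api_access",
]

# Feature inventories of the kind gathered during vendor research (hypothetical).
VENDORS = {
    "Vendor A": {"drag_and_drop_campaign_builder", "prebuilt_email_templates"},
    "Vendor B": {"single_screen_campaign_setup", "multi_step_branching_flows",
                 "custom_data_fields", "external_api_access"},
}


def score(features, checklist):
    """Return the share of checklist items a vendor provides (0.0 to 1.0)."""
    return sum(item in features for item in checklist) / len(checklist)


for name, features in sorted(VENDORS.items()):
    print(f"{name}: simple={score(features, SIMPLE_CHECKLIST):.2f}, "
          f"complex={score(features, COMPLEX_CHECKLIST):.2f}")
```

The point of the sketch is simply that once a checklist is fixed, each rating reduces to a yes-or-no observation per feature, which is what makes the method objective and easy to explain, and why a vendor can score well on one measure and poorly on the other.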
Of course, even two measures can't capture the actual match between different vendors' capabilities and each company's particular requirements. There is truly no substitute for identifying your own needs and assessing the vendors directly against them. All I can hope to do with the generic ratings is help buyers select the few products that are most likely to fit their needs. Narrowing the field early in the process gives marketers more time to examine the remaining contenders in greater depth.
Monday, 23 February 2009
Three Options for Measuring Software Ease of Use