Summary: Yesterday's post described the objectives of my product fit scores for B2B marketing automation vendors and how I set up the original weighting for individual elements. But the original set of scores seemed to favor more complex products, even for small business marketers. Here's how I addressed the problem.
Having decided that my weights needed adjusting, I wanted an independent assessment of which features were most appropriate for each type of buyer. I decided I could base this on the features each set of vendors provided. The only necessary assumption is that vendors offer the features that their target buyers need most. That seems like a reasonable premise -- or at least, more reliable than just applying my own opinions.
For this analysis, I first calculated the average score for each feature in each vendor group. Remember that I was working with a matrix of 150+ features for each vendor, each scored from 0 to 2 (0=not provided, 1=partly provided, 2=fully provided). A higher average means that more vendors provide the feature.
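To make the mechanics concrete, here is a rough sketch of that calculation in Python. The vendor names, feature names, and group labels are invented for illustration; the real data is the 150+ feature matrix described above.

```python
# Sketch of the per-group feature averages, assuming a features-by-vendors
# matrix of 0/1/2 scores and a separate mapping of vendors to groups.
# All data here is invented for illustration only.
import pandas as pd

# Rows = features, columns = vendors; scores: 0 = not provided,
# 1 = partly provided, 2 = fully provided.
scores = pd.DataFrame(
    {"VendorA": [2, 1, 0], "VendorB": [2, 2, 1], "VendorC": [1, 0, 0]},
    index=["email sending", "lead scoring", "revenue attribution"],
)

# Hypothetical assignment of each vendor to a buyer-type group.
groups = {"VendorA": "small", "VendorB": "big", "VendorC": "small"}

# Average score for each feature within each vendor group; a higher
# average means more vendors in that group provide the feature.
group_means = scores.T.groupby(pd.Series(groups)).mean().T
print(group_means)
```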
I then sorted the feature list based on average scores for the small business vendors. This put the least common small business features at the top and the most common at the bottom. I divided the list into six roughly equal-sized segments, representing feature groups that ranged from rare to very common. The final two segments both contained features shared by all small business vendors. One segment had features that were also shared by all big business vendors; the other had features that big business vendors didn't share. Finally, I calculated an average score for the big business vendors for each of the six groups.
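Here is a sketch of the sort-and-segment step, again with invented data standing in for the real per-group averages:

```python
# Sort features by how common they are among small business vendors, then
# cut the sorted list into six roughly equal segments and report the
# average big business score for each segment (synthetic data only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_features = 150
# Invented per-group average scores in the 0-2 range, one row per feature.
group_means = pd.DataFrame(
    {"small": rng.uniform(0, 2, n_features), "big": rng.uniform(0, 2, n_features)},
    index=[f"feature_{i}" for i in range(n_features)],
)

# Least common small business features first, most common last.
ordered = group_means.sort_values("small")

# Six roughly equal segments of the sorted list, rare to very common.
segment_positions = np.array_split(np.arange(len(ordered)), 6)
for i, idx in enumerate(segment_positions, start=1):
    segment = ordered.iloc[idx]
    print(f"segment {i}: big-vendor average = {segment['big'].mean():.2f}")
```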
What I found, not surprisingly, was that some features are more common in big-company systems, some are in all types of systems, and a few are concentrated among small-company systems. In each group, the intermediate vendors (mixed and special) had scores between the small and large vendor scores. This is additional confirmation that the groupings reflect a realistic ranking by buyer needs (or, at least, the vendors’ collective judgment of those needs).
The next step was to see whether my judgment matched the vendors’. Using the same feature groups, I calculated the aggregate weights I had already assigned to those features for each buyer type. Sure enough, the big business features had the highest weights in the big business set, and the small business weights got relatively larger as you moved towards the small business features. The mid-size weights were somewhere in between, exactly where they should have been. Hooray for me!
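The cross-check itself is just a grouped sum of the existing weights. A minimal sketch, with hypothetical weights and group assignments:

```python
# Sketch of the cross-check: sum the weights each buyer-type weight set
# assigns to the features in each group (weights and groups are invented).
import pandas as pd

# Hypothetical weights per feature for two buyer types, plus the feature
# group (1 = rarest among small business vendors ... 6 = universal).
weights = pd.DataFrame(
    {
        "small_weight": [0.0, 0.5, 2.0, 3.0],
        "big_weight":   [2.5, 2.0, 1.0, 0.5],
        "group":        [1, 2, 5, 6],
    },
    index=["revenue attribution", "lead scoring", "email sending", "contact database"],
)

# Aggregate weight per feature group for each buyer type; the expectation
# is that big business weights concentrate in the low-numbered (big-company)
# groups and small business weights in the high-numbered ones.
print(weights.groupby("group")[["small_weight", "big_weight"]].sum())
```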
Self-congratulation aside, we now have firmer ground for adjusting the weights to distinguish systems for different types of buyers. Remember, the small business scores in particular weren’t very different for the different vendor groups, and actually gave higher scores to big business vendors once you removed the adjustment for price. (As you may have guessed, most features in the “more small” group are price-related – proving, as if proof were necessary, that small businesses are very price sensitive.)
From here, the technical solution is obvious: assign negative weights to big business features in the small business weight set. This recognizes that unnecessary features actually reduce the value of a system by making it harder to use. The caveat is that different users need different features. But that's why we have different weight sets in the first place.
(As an aside, it’s worth exploring why simply assigning lower, but still positive, weights to the unnecessary features won’t suffice. Start with the fact that even a low positive weight increases rather than reduces a product score, so products with more features will always have a higher total. This is a fundamental problem with many feature-based scoring systems. In theory, assigning higher weights to other, more relevant features might overcome this, but only if those features are more common among the simpler systems. In practice, most of the reassigned points will go to basic features that are present in all systems. This means the advanced systems get points for all the simple features plus the advanced features, while simple systems get points for the simple features only. So the advanced systems still win. That's just what happened with my original scores.)
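A toy calculation shows the arithmetic (all numbers invented):

```python
# Toy illustration: with only positive weights, the feature-heavy system
# always scores at least as high as the simple one; a negative weight on
# the advanced feature reverses that for the small business weight set.

# Feature scores (0-2) for a simple system and an advanced system.
simple   = {"basic feature": 2, "advanced feature": 0}
advanced = {"basic feature": 2, "advanced feature": 2}

def fit_score(product, weights):
    """Weighted sum of feature scores."""
    return sum(weights[f] * score for f, score in product.items())

# Low-but-positive weight on the advanced feature: advanced still wins.
low_positive = {"basic feature": 1.0, "advanced feature": 0.2}
print(fit_score(simple, low_positive), fit_score(advanced, low_positive))  # 2.0 vs 2.4

# Negative weight on the advanced feature: the simple system now wins.
negative = {"basic feature": 1.0, "advanced feature": -0.5}
print(fit_score(simple, negative), fit_score(advanced, negative))          # 2.0 vs 1.0
```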
Fortified with this evidence, I revisited my small business scoring and applied negative weights to items I felt were important only to large businesses. I applied similar but less severe adjustments to the mid-size weight set. The mid-size weights were in some ways a harder set of choices, since some big-company features do add value for mid-size firms. Although I worked without looking at the feature groups, the negative weights were indeed concentrated among the features in the large business groups.
I used the adjusted weights to create new product fit scores. These now show much more reasonable relationships across the vendor groups: that is, each vendor group has the highest scores for its primary buyer type and there’s a big difference between small and big business vendors. Hooray for me, again.
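In matrix terms, each product fit score is just the dot product of a vendor's feature scores with a buyer-type weight vector. A minimal sketch with invented data:

```python
# Minimal sketch of recomputing product fit scores with adjusted weight
# sets: each score is the dot product of a vendor's 0/1/2 feature scores
# with a buyer-type weight vector (all data invented).
import pandas as pd

scores = pd.DataFrame(
    {"VendorA": [2, 2, 0], "VendorB": [2, 1, 2]},
    index=["email sending", "lead scoring", "revenue attribution"],
)

# Adjusted weight sets: note the negative weight on the big business
# feature in the small business column.
weight_sets = pd.DataFrame(
    {"small": [1.0, 0.5, -0.5], "big": [0.5, 1.0, 1.5]},
    index=scores.index,
)

# Rows = weight sets (buyer types), columns = vendors.
fit_scores = weight_sets.T @ scores
print(fit_scores)
```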
One caveat is that the negative weights mean the weights in each set no longer add to 100%. This means that scores from different weight sets (i.e., reading down the chart) are no longer directly comparable. There are technical ways to solve this, but it's not worth the trouble for this particular project.
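For what it's worth, one possible fix (not something I applied here) would be to rescale each weight set by the sum of the absolute values of its weights:

```python
# One possible fix (not used in this project): rescale each weight set by
# the sum of the absolute values of its weights, so every set has the same
# total weight magnitude despite the negative entries (invented data).
import pandas as pd

weight_sets = pd.DataFrame(
    {"small": [1.0, 0.5, -0.5], "big": [0.5, 1.0, 1.5]},
    index=["email sending", "lead scoring", "revenue attribution"],
)

normalized = weight_sets / weight_sets.abs().sum()
print(normalized.abs().sum())  # each column now sums to 1.0 in absolute value
```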
Tomorrow I'll describe the vendor fit scores. Mercifully, they are much simpler.