For better or worse, I've taken my usability scoring project as far as I can. So, having run out of reasons to delay, let me present the results.
To start back at the beginning, the objective has been to find a practical, objective way to evaluate the usability of different demand generation systems. Lacking the resources for a detailed user survey or hands-on scenario testing, I chose to build a checklist of functions that I believe correlate with usability.
But I quickly ran into a problem. Because usability includes both ease of use and suitability for a given task, a single usability score would be misleading: it would have to favor either the construction of simple campaigns or of complex ones. I therefore chose to develop separate scores for each. The idea was to remind people that the question of which system is "best" has no simple answer, precisely because different systems are good at different things. My hope was that people would then take the next logical step of assessing systems against their own requirements.
In working through the checklist items themselves, I quickly realized that some items apply to usability for both simple and complex campaigns. I therefore ended up with three groups of scoring elements: those for simple campaigns, those for complex campaigns, and those shared by both.
The final piece of background is my definition of simple vs. complex campaigns. In both cases, I see the basic flow as an outbound email, landing page, multi-step nurturing campaign, lead scoring, and transfer to a CRM system for sales followup. A simple campaign would do this for a single product, offer, customer segment and region, while a complex campaign could involve several of each. Obviously these aren't very specific scenarios, but I think the ability to efficiently deliver many different treatments to different customers is ultimately what separates simple from complex in a demand generation context.
On to the checklist items themselves. I tried to find items that could be judged objectively as present or not, without too much evaluation on my part of how well or poorly they had been implemented. This turned out to be reasonably easy and has the major benefit that I can assign a 1 or 0 in nearly all cases. The only exceptions were a couple of cases where it seemed most fair to give a vendor half-credit.
The harder part was deciding which items correlated with usability. Here I considered the functions needed to execute simple and complex campaigns, and focused on functions that made those campaigns easy to set up and run.
This means I excluded functions I consider important but not themselves directly related to ease of use. Or, more precisely, I excluded functions that are more or less equally easy to use in the different products. For example, every demand generation system provides an editor to create emails. But these are generally so similar that they don't really factor into differences in system usability.
Where complex campaigns are concerned, this approach also means I excluded functions having to do with the scope or sophistication of a system rather than bearing directly on ease of use. As I discussed in one of the earlier posts laying out this project, my final scoring system will include separate sophistication scores as well. Similarly, a final scoring system should include other items such as vendor viability, or at least proxies such as years in business, number of clients, and funding. Once more, the logic is to provide enough different scores that people are led to consider which of those scores are important to their own business.
Since the actual scores for most items were 1 or 0, I simply added up the item scores to get composite scores for each vendor. I considered weighting different items, but it didn't appear that any reasonable set of weights would have much impact on the relative rankings of the different vendors. So it seemed best not to bother.
Okay then. Let's look at the items I've chosen and how I've scored the vendors listed in the Raab Guide to Demand Generation Systems. (Five of these were in the original Guide. Marketbright was added on March 3, and Neolane will be added in a week or two once we finalize their entry.)
Rather than overwhelm everyone with a single, huge blog post, I'm going to break this into three parts. This post will cover the shared items. The next will cover simple campaign items, and a third will cover complex campaign items. A final post will summarize and discuss the results.
One final caveat on the scores: they're based on my best information about the vendors, but it's possible something has changed or I missed something in my research. I expect the vendors will let me know if they have questions, and will certainly adjust the scores if appropriate.
Shared Items
These items apply to both simple and complex campaigns. They relate primarily to creation of marketing assets such as emails and landing pages, and to execution of lead scoring.
Select marketing assets from shared libraries. Users can draw on existing libraries of marketing assets when setting up a campaign, rather than creating them from scratch. These assets can be modified or used as is. Even though every system listed below can do this, it's included because some other products might not.
| Eloqua | Manticore Technology | Market2Lead | Marketbright | Marketo | Neolane | Silverpop Engage B2B |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Text search for assets. Users can enter a search string, such as a word or phrase, and get a list of all assets having that string in their name. This makes it easier to find specific assets without keeping track of their exact name or which campaigns used them previously. Again, although every system on this list has this capability, systems not listed might not.
| Eloqua | Manticore Technology | Market2Lead | Marketbright | Marketo | Neolane | Silverpop Engage B2B |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Share marketing assets across campaigns. The same asset can be used in multiple campaigns without creating a new copy. This saves effort if an asset must be updated.
| Eloqua | Manticore Technology | Market2Lead | Marketbright | Marketo | Neolane | Silverpop Engage B2B |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 0 | 1 | 1 | 1 |
Live templates for asset frames. Assets are built in layered templates with common elements such as headers, footers and styles. The bodies of these assets may be different. The templates are "live" in the sense that a change to the template is applied to all assets using that template, even if the assets are already deployed to a campaign. Like shared assets, this saves effort if a common element must be updated. Silverpop gets a half point because it can share templates for Web forms, but not emails.
| Eloqua | Manticore Technology | Market2Lead | Marketbright | Marketo | Neolane | Silverpop Engage B2B |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 0 | 1 | 1 | 0.5 |
Trigger lead scoring outside a campaign step. The system will update lead scores without users building explicit steps into their campaigns to trigger this update. This simplifies campaign creation and ensures that scores are always current. Typically, scores are updated automatically after a data change. Sometimes, they are updated on a regular schedule instead.
| Eloqua | Manticore Technology | Market2Lead | Marketbright | Marketo | Neolane | Silverpop Engage B2B |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Central definition of lead scoring rules. Scoring rules are defined in a central location rather than separately for each campaign. This saves effort and ensures consistency.
| Eloqua | Manticore Technology | Market2Lead | Marketbright | Marketo | Neolane | Silverpop Engage B2B |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Total Score for Shared Items
| Eloqua | Manticore Technology | Market2Lead | Marketbright | Marketo | Neolane | Silverpop Engage B2B |
| --- | --- | --- | --- | --- | --- | --- |
| 6 | 6 | 6 | 4 | 6 | 6 | 5.5 |
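Since the composite is just an unweighted sum of the per-item scores, the totals above can be reproduced mechanically. As a minimal sketch (the data below simply restates the six shared-item tables above; the code itself is my own illustration, not anything provided by the Guide or the vendors):

```python
# Shared-item scores transcribed from the tables above.
# 1 = present, 0 = absent, 0.5 = partial credit
# (as with Silverpop's live templates for Web forms but not emails).
shared_items = {
    "Eloqua":               [1, 1, 1, 1, 1, 1],
    "Manticore Technology": [1, 1, 1, 1, 1, 1],
    "Market2Lead":          [1, 1, 1, 1, 1, 1],
    "Marketbright":         [1, 1, 0, 0, 1, 1],
    "Marketo":              [1, 1, 1, 1, 1, 1],
    "Neolane":              [1, 1, 1, 1, 1, 1],
    "Silverpop Engage B2B": [1, 1, 1, 0.5, 1, 1],
}

# Unweighted composite score: the plain sum of the item scores.
totals = {vendor: sum(scores) for vendor, scores in shared_items.items()}

for vendor, total in totals.items():
    print(f"{vendor}: {total}")
```

Weighting the items would amount to replacing `sum(scores)` with a weighted sum, but as noted earlier, no reasonable set of weights changes the relative rankings here.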
The next post will look at scores for items specific to simple campaigns.