When I was researching last week’s piece on Market2Lead, one of the points the vendor stressed was their ability to create a full-scale marketing database with information from external sources to analyze campaign results. My understanding of competitive products was that they had similar capabilities, at least hypothetically, so I chose not to list that as a Market2Lead specialty.
But I recently spoke with on-demand business intelligence vendor LucidEra, which said it had also found that demand generation systems could not integrate such information. They even cited one demand generation vendor that had turned to them for help. (In fact, LucidEra is releasing a new module for lead conversion analysis today to address this very need. I plan to write more about LucidEra next week.)
Yet another source, Aberdeen Group’s recent study on Lead Prioritization and Scoring: The Path to Higher Conversion (free with registration, for a limited time), showed linking marketing campaigns to closed deals to be the least commonly available of the key capabilities required for effective lead management. Just 64% of the best-in-class companies had this capability, even though 92% had a lead management or demand generation system.
Quite frankly, these results baffle me, because every demand generation vendor I’ve spoken with has the ability to import data from sales automation systems. Perhaps I’ve missed some limit on exactly what kind of data they can bring back. I’ll be researching this in more detail in the near future, so I’ll get to the bottom of it fairly soon.
In the meantime, the Aberdeen report provided some other interesting information. Certainly the overriding point was that technology can’t do the job by itself: business processes and organizational structures must be in place to ensure that marketing and sales work together. Of course, this is true of pretty much any technology, but it’s especially important with lead management because it crosses departmental boundaries. Unfortunately, this is also a rather boring, nagging, floss-your-teeth kind of point that isn’t much fun to discuss once you’ve made it. So, having tipped our hat to process, let’s talk about technology instead.
I was particularly intrigued by what Aberdeen found about the relative deployment rates for different capabilities. The study suggests (at least to me; Aberdeen doesn’t quite put it this way) that companies tend to start by deploying a basic lead management platform, then add lead nurturing programs, and then add lead prioritization and scoring. These could all be done by the same system, so moving through the stages is less a matter of swapping software than of making fuller use of the system you already have.
If you accept this progression, then prioritization and scoring is at the leading edge of lead management sophistication. Indeed, it is the least common of the key technologies that Aberdeen lists, in place at just 77% of the best-in-class companies. (Although the 64% figure for linking campaigns to closed deals is lower, Aberdeen lists that under performance measurement, not technology.) Within lead scoring itself, Aberdeen reports that customer-provided information, such as answers to survey questions, is used more widely than inferred information, such as Web site behavior. Aberdeen suggests that companies will add inferred data, and in general make their scoring models increasingly complex, as they grow in sophistication.
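To make the distinction concrete, here is a minimal sketch, in Python, of how a simple scoring rule might combine customer-provided answers with inferred Web behavior. The field names and weights are hypothetical, purely for illustration; neither Aberdeen nor any particular vendor describes a model like this.

```python
# Hypothetical lead scoring sketch: explicit (customer-provided) signals
# plus inferred (behavioral) signals, each with illustrative weights.

def score_lead(lead: dict) -> int:
    """Return a simple additive score for one lead record."""
    score = 0

    # Explicit data: answers the prospect supplied on a form or survey.
    if lead.get("budget_confirmed"):                   # said they have budget
        score += 30
    if lead.get("purchase_timeframe") == "under_3_months":
        score += 20

    # Inferred data: behavior observed on the Web site.
    score += 5 * lead.get("pricing_page_visits", 0)    # repeated pricing views
    if lead.get("downloaded_whitepaper"):
        score += 10

    return score

# Example: confirmed budget plus two pricing-page visits scores 30 + 10 = 40.
print(score_lead({"budget_confirmed": True, "pricing_page_visits": 2}))
```

Even in this toy form, you can see why inferred data raises complexity: behavioral signals arrive continuously and have to be weighted against one another, where survey answers are captured once and scored directly.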
This view of inferred data in particular, and scoring models in general, as leading-edge functions is important. Many of the demand generation vendors I’ve spoken with are putting particular stress on these areas, both by promoting their existing capabilities and by adding to them through enhancements. In doing this, they are probably responding to the demands of their most advanced customers, a natural enough reaction and one that is laudably customer-driven. But there could also be a squeaky wheel problem here: vendors may be reacting to a vocal minority of existing customers rather than a silent majority of prospects and less-advanced clients who have other needs. Weakness in campaign results reporting, external data integration and other analytics is one area of possible concern. General ease of use and customization could be another.
In a market that is still in its very early stages, the great majority of potential buyers are still quite unsophisticated. It would be a big mistake for vendors to engage in a typical features war, adding capabilities to please a few clients at the cost of complexity that makes the system harder for everyone else. Assuming that buyers can accurately assess their true needs (a big if; who isn’t impressed by bells and whistles?), adding too many features would harm the vendors’ own sales as well.
The Aberdeen report provides some tantalizing bits of data on this issue. It compares what buyers said was important during the technology assessment with what they decided was important after using the technology. But I’m not sure what is being reported: there are five entries in each group (the top five, perhaps?), of which only “customizable solution” appears in both. The other four listed for pre-purchase were: marketing maintained and operated; easy to use interface; integration with CRM; and reminders and event triggers. The other four for post-purchase were: Web analytics; lead scoring flexibility; list segmentation and targeting; and ability to automate complex models.
The question is how you interpret this. Did buyers change their minds about what mattered, or did their focus simply switch once they had a solution in place? I’d guess the latter. From a vendor perspective, of course, you want to emphasize features that will make the sale. Since ease of use ranks in the pre-purchase group, that would seem to favor simplicity. But you want happy customers too, which means providing the features they’ll need. So do you add the features and try to educate buyers about why they’re important? Or do you add them and hide them during the sales process? Or do you just not add them at all?
Would your answer change if I told you, Monty Hall style, that there is little difference between best-in-class companies and everyone else on the pre-sales considerations, but that customization and list segmentation were much less important to less sophisticated customers in the post-sales ranking?
In a way, this is a Hobson’s choice: you can’t not provide the features customers need to do their jobs, and you don’t want them to start with you and then switch to someone else. So the only question is whether you try to hide the complexity or expose it in all its glory. The latter would work for advanced buyers, but, at this stage in the market, those are few in number. So it seems to me that clever interface design, exposing just as many features as the customer needs at the moment, is the way to go.