Marketbright sees its most important differentiator as a sophisticated architecture designed to coordinate marketing activities throughout a large organization. This doesn't strike me as a very effective selling point: buying a product because of its architecture is the software equivalent of reading Playboy for the articles. (Do I get credit for resisting the temptation to link to Playboy.com?) What really matters are the features facilitated by this architecture. According to the company Web site, these include “full document repository and asset management, multi-currency budget planning and management and a range of integrated collaboration features”. Now that's something to get excited about. Hubba hubba, eh?
At least this clarifies which end of the demand generation market will find Marketbright most attractive. Indeed, Pilcher told me the product sells best to people who have worked in large organizations and seen first-hand what it takes to support collaboration within marketing. These people may currently be working in small firms, so Marketbright has ended up with customers of all sizes. Pricing ranges from $20,000 to $200,000 per year based on the modules and number of users, so the system is financially competitive across the spectrum.
Not having seen the product, I don’t know whether its sophisticated management features come at the price of end-user complexity. This is a common trade-off. One hint of Marketbright’s approach may be that Pilcher recommends his clients build separate campaigns for different customer segments, rather than “boiling the ocean” by creating a single campaign with branches to handle all contingencies. This suggests that Marketbright has at least tried to keep things simple.
Pilcher and I had a lengthy subsequent email discussion about usability, “violently agreeing” that it’s an important though elusive measure. My final conclusion was similar to the positions I’ve taken before: usability has to be measured separately for different functions, levels of campaign sophistication, and user skill sets. Where I may have changed my mind is in grudgingly agreeing that it’s legitimate to summarize the details into simple measures that could be plotted on a graph. The obvious ones are usability and functionality scores. I still fear this could mislead by obscuring important information: for example, deep functionality in a few areas could generate the same score as limited functionality across many areas. (Pilcher proposed the number of channels as a separate dimension, but then a system with weak functionality in many channels scores better than a system that is strong in just a few. I consider that equally misleading.) But if a two-dimensional summary offers an attractive entry point that in turn leads to deeper exploration, it’s better than scaring people away by showing them the details at the start.
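To make that depth-versus-breadth concern concrete, here’s a tiny sketch. The channel names, scores, and summing approach are purely hypothetical illustrations of my point, not anything Pilcher or Marketbright proposed: two systems with radically different profiles can collapse to the same summary number.

```python
# Hypothetical per-channel functionality scores on a 0-10 scale.
# The channels and numbers are made up for illustration only.
deep_but_narrow = {"email": 9, "web": 9}
shallow_but_broad = {"email": 2, "web": 2, "search": 2,
                     "social": 2, "events": 2, "direct_mail": 2,
                     "mobile": 2, "print": 2, "telemarketing": 2}

def summary_score(channel_scores):
    """Collapse per-channel scores into one number by summing."""
    return sum(channel_scores.values())

print(summary_score(deep_but_narrow))    # 18
print(summary_score(shallow_but_broad))  # 18 -- identical score,
                                         # very different products
```

That identical 18 is exactly the information the one-number summary throws away, which is why the summary should be a doorway to the detail rather than a substitute for it.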