What Brain Research Teaches about Selecting Marketing Automation Software
Sunday, 6 May 2012

I’m spending more time on airplanes these days, which means more time browsing airport bookshops. Since neither spy stories nor soft-core porn is to my taste, the pickings are pretty slim. But I did recently stumble across Jonah Lehrer’s How We Decide, one of several recent books that explain the latest scientific research into human decision-making.
Lehrer’s book shuttles between commonly known irrationalities in human behavior – things like assigning a higher value to avoiding loss than to achieving gain – and the less familiar (to me, at least) brain mechanisms that drive them. He makes a few key points, including the importance of non-conscious learning in driving everyday decisions (it turns out that people who can make only conscious, rational decisions are pretty much incapable of functioning), the powerful influence of irrelevant facts (for example, merely being exposed to a random number changes the price you’re willing to pay for an unrelated object), and the need to suppress emotion when faced with a truly unprecedented problem (because your previous experience is irrelevant).
These are all relevant to marketing, since they give powerful insights into ways to get people to do things. Indeed, it’s frightening to recognize how much this research can help people manipulate others to act against their interests. But good marketers, politicians, and poker players have always used these methods intuitively, so exposing them may not really make the world a more dangerous place.
In any event, my own dopamine receptors were most excited by research related to formal decision making, such as picking a new car, new house, or strawberry jam. Selecting software (or marketing approaches) falls into the same category. Apparently the research shows that carefully analyzing such choices actually leads to worse decisions than making a less considered judgment. The mechanism seems to be that people consider every factor they list, even the ones that are unimportant or totally irrelevant.
It’s not that snap judgments are inherently better. The most effective approach is to gather all the data but then let your mind work on it subconsciously – what we normal folks call “mulling things over” – since the emotional parts of the brain are better at balancing the different factors than the rational brain. (I’m being horribly imprecise with terms like “emotional” and “rational”, which are shorthand for different processes in different brain regions. Apologies to Lehrer.)
As someone who has spent many years preparing detailed vendor analyses, I found this intriguing if unwelcome news. Since one main point of the book is that people rationalize opinions they’ve formed in advance, I’m quite aware that “deciding” whether to accept this view is not an objective process. But I also know that first impressions, at least where software is concerned, can’t possibly uncover all the important facts about a product. So the lesson I’m taking away is the need to defer judgment until all factors have been identified and then to carefully and formally weight them so the irrelevant ones don’t distort the final choice.
As it happens, that sort of weighting is exactly what I’ve always insisted is important in making a sound selection. My process has been to have clients first list the items to consider and then assign them weights that add up to 100%. This forces trade-offs in deciding what’s most important. The next step is to score each vendor on each item. I always score one item at a time across all vendors, since the scores are inherently relative. Finally, I use the weights to build a single composite score for ranking the vendors.
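For the arithmetic-minded, here is a minimal sketch of that process in Python. Every criterion, weight, and score below is hypothetical, invented purely for illustration; the only mechanics that matter are weights that total 100% and a weighted composite per vendor.

```python
# Minimal sketch of weighted vendor scoring. All criteria, weights, and
# scores below are hypothetical examples, not data from a real selection.

criteria_weights = {            # weights must add up to 1.0 (i.e., 100%)
    "email capabilities": 0.30,
    "lead scoring":       0.25,
    "CRM integration":    0.25,
    "reporting":          0.20,
}

# Scores are assigned one criterion at a time across all vendors,
# since they are inherently relative (here, on a 1-10 scale).
vendor_scores = {
    "Vendor A": {"email capabilities": 8, "lead scoring": 6,
                 "CRM integration": 9, "reporting": 5},
    "Vendor B": {"email capabilities": 6, "lead scoring": 9,
                 "CRM integration": 7, "reporting": 8},
    "Vendor C": {"email capabilities": 7, "lead scoring": 7,
                 "CRM integration": 6, "reporting": 9},
}

assert abs(sum(criteria_weights.values()) - 1.0) < 1e-9, "weights must total 100%"

# Composite score for each vendor: the weight-adjusted sum of its scores.
composites = {
    vendor: sum(criteria_weights[item] * score for item, score in scores.items())
    for vendor, scores in vendor_scores.items()
}

# Rank vendors by composite score, best first.
for vendor, composite in sorted(composites.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{vendor}: {composite:.2f}")
```

The design point is simply that the ranking falls out of the arithmetic once the weights are fixed, which is what keeps any single item (or any one enthusiastic advocate) from quietly dominating the result.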
In theory, the weighting reduces the impact of unimportant factors; setting the weights separately from the scoring avoids weights that favor a particular vendor; and calculating composite scores prevents undue influence by the first or last item reviewed. Whether things work as well as I’d like to believe, I can’t really say. But I can report three common patterns that seem relevant:
- the final winner often differs from the one I originally expected. This is the “horse race” aspect of the process, and I think it means we’re successfully avoiding being stuck with premature conclusions.
- when the composite scores don’t match intuitive expectations, there’s usually a problem with the weights. I interpret this to mean that we’re listening to the emotional part of the brain and taking advantage of its insights.
- as scoring proceeds, one vendor often emerges as the consistent winner, essentially “building momentum” as we move towards a conclusion. I’ve always enjoyed this, since it makes for an easy final decision. But now I’m wondering whether we’re making the common error of seeing patterns that don’t exist. Oh well, two out of three isn’t bad.
Perhaps I could reduce the momentum effect by hiding the previous scores when each new item is assessed. In any event, I’ve always felt the real value of this process was in the discussions surrounding the scoring rather than the scores themselves. As I said, the scores are usually irrelevant because the winner is apparent before we finish.
Still, having a clear winner doesn’t mean we made the right choice. The best I can say is that clients have rarely reported unpleasant surprises after deployment. We may not have made the best choice, but at least we understood what we were getting into.
I guess it’s no surprise that I’d conclude my process is a good one. Indeed, research warns that people see what they want to see (the technical term is “confirmation bias”; the colloquial term is “pride”). But I honestly don’t see much of an alternative. Making quick judgments on incomplete information is surely less effective, and gathering data without any formal integration seems hopelessly subjective. Perhaps the latter approach is what Lehrer’s research points to, but I’d (self-servingly) argue that software choices fall into the category of unfamiliar problems, which the brain hasn’t trained itself to solve through intuition alone.