Tuesday, 2 October 2007

Marketing Performance Measurement: No Answers to the Really Tough Questions

I recently ran a pair of two-day workshops on marketing performance measurement. My students had a variety of goals, but the two major ones they mentioned were the toughest issues in marketing: how to allocate resources across different channels and how to measure the impact of marketing on brand value.
Both questions have standard answers. Channel allocation is handled by marketing mix models, which analyze historical data to determine the relative impact of different types of spending. Brand value is measured by assessing the customer attitudes that matter in a given market and how well a particular brand matches them.
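For readers curious what that analysis actually looks like, the core of most marketing mix models is a regression of sales against spending by channel. Here is a minimal sketch in Python; the channel names and all figures are invented for illustration, and a real model would be far richer.

```python
import numpy as np

# Toy weekly data: spend by channel (columns) and total sales.
# All figures are invented for illustration.
spend = np.array([
    # TV,   search, email
    [120.0,  40.0, 10.0],
    [100.0,  55.0, 12.0],
    [ 90.0,  60.0,  8.0],
    [130.0,  35.0, 15.0],
    [110.0,  50.0, 11.0],
    [ 95.0,  65.0,  9.0],
])
sales = np.array([820.0, 800.0, 790.0, 845.0, 815.0, 805.0])

# Add an intercept column to capture baseline (non-marketing) sales.
X = np.column_stack([np.ones(len(spend)), spend])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)

baseline, tv, search, email = coefs
print(f"baseline sales: {baseline:.1f}")
print(f"sales per dollar -- TV: {tv:.2f}, search: {search:.2f}, email: {email:.2f}")
```

A production model would add lagged effects, diminishing-returns curves, seasonality, and far more data, which is exactly why these projects are costly; the point here is only the general shape of the analysis.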
Yet, despite my typically eloquent and detailed explanations, my students found these answers unsatisfactory. Cost was one obstacle for most of them; lack of data was another. They really wanted something simpler.
I’d love to report that I gave it to them, but I couldn’t. I had researched these topics thoroughly in preparation for the workshops and hadn’t found any alternatives to the standard approaches; further research since then still hasn’t turned up anything else of substance. Channel allocation and brand value are inherently complex, and there are just no simple ways to measure them.
The best I could suggest was to use proxy data when a thorough analysis is not possible due to cost or data constraints. For channel allocation, the proxy might be incremental return on investment by channel: switching funds from low-ROI to high-ROI channels doesn’t really measure the impact of the change in marketing mix, but it should lead to an improvement in the average level of performance. Similarly, surveys to measure changes in customer attitudes toward a brand don’t yield a financial measure of brand value, but do show whether it is improving or getting worse. Some compromise is unavoidable here: companies not willing or able to invest in a rigorous solution must accept that their answers will be imprecise.
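To make the proxy concrete, here is a minimal sketch of the reallocation logic, using hypothetical budgets and ROI figures: shift a slice of budget from the lowest-ROI channel to the highest, and the blended return rises even though the true mix effects were never measured.

```python
# Hypothetical figures: (budget, incremental return per dollar) by channel.
channels = {
    "direct mail": (400_000, 1.10),
    "search":      (250_000, 2.40),
    "display":     (350_000, 0.80),
}

def blended_return(alloc):
    """Total return across channels, assuming each ROI holds at the margin."""
    return sum(budget * roi for budget, roi in alloc.values())

print(f"before: ${blended_return(channels):,.0f}")

# Shift $100k from the lowest-ROI channel (display) to the highest (search).
shift = 100_000
channels["display"] = (channels["display"][0] - shift, channels["display"][1])
channels["search"]  = (channels["search"][0] + shift, channels["search"][1])

print(f"after:  ${blended_return(channels):,.0f}")
# The gain is shift * (2.40 - 0.80) = $160,000 -- better average performance,
# though not a true measurement of the mix change itself.
```

The assumption that each channel’s ROI holds constant at the margin is precisely the imprecision a company accepts by skipping the rigorous analysis.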
This round of answers was little better received than the first. Even ROI and customer attitudes are not always available, and they are particularly hard to measure in multi-channel environments, where the result of a particular marketing effort cannot easily be isolated. You can try still simpler measures, such as spending or responses for channel performance, or market share for brand value. But these are so far removed from the original question that it’s difficult to present them as meaningful answers.
The other approach I suggested was testing. The goal here is to manufacture data where none exists, thereby creating something to measure. This turned out to be a key concept throughout the performance measurement discussions. Testing also shows that marketers are at least doing something rigorous, which helps satisfy critics who feel marketing investments are totally arbitrary. Of course, this is a political rather than an analytical benefit, but politics are important. The final benefit of testing is that it provides a platform for continuous improvement: even though you may not know the absolute value of any particular marketing effort, a test tells you whether one option is relatively superior to another. Over time, this allows a measurable gain in results compared with the original levels. Eventually it may provide benchmarks for comparing different marketing efforts against each other, helping with both channel allocation and brand value.
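The “relatively superior” point is easy to make concrete. Even with no absolute benchmark, a simple split test tells you which of two treatments performs better. Here is a minimal sketch using a standard two-proportion z-test; the response counts are invented.

```python
from math import sqrt

# Invented results from a split test: (responses, pieces mailed) per treatment.
a_resp, a_n = 230, 10_000
b_resp, b_n = 275, 10_000

p_a, p_b = a_resp / a_n, b_resp / b_n
p_pool = (a_resp + b_resp) / (a_n + b_n)

# Standard two-proportion z-test for the difference in response rates.
se = sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n))
z = (p_b - p_a) / se

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}")
# |z| > 1.96 means the difference is significant at the 95% level: B is
# relatively better, even though neither number says what the program is
# "worth" in absolute terms.
```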
Even testing isn’t always possible, as my students were quick to point out. My answer at that point was simply that you have to seek situations where you can test: for example, Web efforts are often more measurable than conventional channels. Web results may not mirror results in other channels, because Web customers may themselves be very different from the rest of the world. But this again gets back to the issue of doing the best with the resources at hand: some information is better than none, so long as you keep in mind the limits of what you’re working with.
I also suggested that testing is more feasible than marketers sometimes think, if they really make it a priority. This means selecting channels partly on the basis of whether testing is possible; designing programs so that testing is built in; and investing more heavily in the test activities themselves (such as incentives for survey participants). This approach may ultimately create a bias in favor of testable channels, which seems excessive at first: you wouldn’t want to discard an effective channel simply because you couldn’t test it. But it makes sense once you realize that testable channels can be improved continuously, while results in untestable channels are likely to stagnate. Given this dynamic, testable channels will sooner or later become more productive than untestable ones, even if they are less efficient at the start.
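The arithmetic behind that last claim is simple compounding. Suppose, hypothetically, that a testable channel starts 20% less efficient but gains 5% per test cycle from continuous improvement, while the untestable channel stays flat:

```python
# Hypothetical starting efficiencies (return per dollar) and improvement rate.
testable, untestable = 1.00, 1.25   # testable starts 20% less efficient
lift_per_cycle = 0.05               # 5% gain per test cycle from iteration

cycles = 0
while testable <= untestable:
    testable *= 1 + lift_per_cycle
    cycles += 1

print(f"testable channel overtakes after {cycles} test cycles "
      f"({testable:.2f} vs {untestable:.2f})")
# With these assumptions: 1.00 * 1.05**5 = 1.28 > 1.25, so five cycles.
```

Change the assumed lift or the starting gap and the crossover point moves, but as long as the lift is positive the testable channel eventually wins.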
I offered all these considerations to my students, and may have seen a few lightbulbs switch on. It was hard to tell: by the time we had gotten this far into the discussion, everyone was fairly tired. But I think it’s ultimately the best advice I could have given them: focus on testing and measuring what you can, and make the best use possible of the resulting knowledge. It may not directly answer your immediate questions, but you will learn how to make the most effective use of your marketing resources, and that’s the goal you are ultimately pursuing.