Benjamin Disraeli is often credited with saying, “There are lies, damned lies, and statistics.” Every day we are bombarded with all kinds of statistics: the unemployment rate, a baseball player’s batting average, starting salaries for teachers, findings from a recent survey, and so on. In the marketing research field, we live, breathe, and dream statistics.
Yet most people do not fully understand statistics, and many blindly take them at face value. As a marketing research consultant, I love it when – in fact, I insist that – my clients challenge the statistics I give them. Marketing research can be expensive, so clients should make their research vendors defend their claims; it is their right.
Today’s post gives you an example of how to analyze statistics critically. The example comes from a UK blog post entitled “Report: Using analytics can boost email marketing returns,” by Greig Daines. In this post, Daines references a report from Nedstat, a European web analytics firm. Let me reiterate before I go any further that my purpose here is NOT to say the claims or findings in this report are wrong or biased; the purpose is to show how to place the findings in perspective.
The Nedstat report claims that, according to a survey it conducted, the e-mail marketing revenues of firms that use web analytics are almost quadruple those of firms that do not. The survey also found that companies can increase their profits from e-mail marketing 18-fold by using web analytics to track and refine their e-mail campaigns. Impressive, isn’t it?
That’s where the critical thinking begins. The first question to ask is: “Who conducted the survey?”, followed by “What is their motive?” Nedstat is a web analytics firm. They certainly want people to see the value of web analytics! Also, you need to ask: “Who was surveyed?” and “How many were surveyed?” According to the report, 159 e-mail marketers in the UK, France, and Germany were surveyed.
“How were these 159 e-mail marketers selected?” If these 159 e-mail marketers were selected randomly – that is, every e-mail marketer in these three countries had an equal chance of being selected for the survey – then Nedstat has a representative sample. On the other hand, if these 159 marketers self-selected to do the survey, the sample is not representative, and the claims made above cannot be generalized to all e-mail marketers in those countries.
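The equal-chance selection described above can be sketched in a few lines of Python. Everything here is invented for illustration – the population size, the marketer names – but the mechanism is the point: drawing the sample at random is what makes n = 159 defensible as representative.

```python
import random

# Hypothetical sampling frame (names invented): every e-mail marketer
# in the UK, France, and Germany.
population = [f"marketer_{i}" for i in range(10_000)]

# A simple random sample: random.sample draws without replacement and
# gives every marketer in the frame the same chance of selection.
random.seed(42)
respondents = random.sample(population, k=159)

print(len(respondents))       # 159
print(len(set(respondents)))  # 159 (no marketer is selected twice)
```

A self-selected sample, by contrast, has no such frame: whoever happens to volunteer is in, and no probability statement about the wider population can be made.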
“How many people were invited to take the survey?” Yes, 159 marketers were surveyed; but what if 1,590 marketers in all had been invited to participate? Then only 10% of those invited responded. Are the 159 e-mail marketers different from the 1,431 e-mail marketers who did not respond? Perhaps the 1,431 non-responders felt that their web analytics efforts were not working and didn’t want to divulge that in the survey. Maybe they were very successful and didn’t want to alert their competition. Maybe the 159 responders had a vested interest in the field of web analytics and wanted to sound off. If any of these is the case, the sample suffers from non-response bias and the survey findings are bogus.
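Non-response bias is easy to demonstrate with a toy simulation. The revenue figures below are entirely made up; the sketch simply assumes (as one of the scenarios above does) that the most successful marketers are the ones who respond, and shows how that inflates the reported average.

```python
import random
import statistics

invited, responded = 1_590, 159
print(f"Response rate: {responded / invited:.0%}")  # prints "Response rate: 10%"

# Invented "true" e-mail revenues for all 1,590 invited marketers.
random.seed(1)
true_revenue = [random.gauss(100, 30) for _ in range(invited)]

# Assumed (hypothetical) response mechanism: only the 159 most
# successful marketers bother to answer the survey.
respondent_revenue = sorted(true_revenue, reverse=True)[:responded]

print(round(statistics.mean(true_revenue)))        # roughly 100
print(round(statistics.mean(respondent_revenue)))  # far above 100
```

The survey’s average is computed only from the respondents, so under this mechanism it badly overstates the population figure – and nothing in the published number itself would warn you.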
Other questions you need to ask: “What was the average revenue and profit of the e-mail marketers who did and didn’t use web analytics?” Eighteen-fold and four-fold don’t mean anything until you know the averages. “What was the standard deviation for revenues and profits?” That is, how spread out are the data? There are many more questions you can ask, but these are enough to get you in the driver’s seat. To recap:
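Why isn’t an average enough on its own? The following sketch, using invented revenue figures, shows two groups with identical means but very different spreads – only the standard deviation reveals that the second group’s results are wildly inconsistent.

```python
import statistics

# Invented revenue figures (in £1,000s) for two groups of marketers.
group_a = [90, 95, 100, 105, 110]
group_b = [10, 55, 100, 145, 190]

for name, data in (("group A", group_a), ("group B", group_b)):
    print(name, statistics.mean(data), round(statistics.stdev(data), 1))
# group A: mean 100, stdev 7.9
# group B: mean 100, stdev 71.2
```

A vendor who reports only that both groups average £100,000 is telling you half the story; the spread tells you how much any single firm’s experience can deviate from that average.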
- Make sure your market research vendor clearly explains its methodology for data collection and analysis;
- Consider the source of the data. There’s always a reason they are publishing those numbers;
- Make sure you are given all the statistics you need to know the full story so that you can make the most informed decision; and
- Go with your gut. If research findings sound too good to be true, most likely they are. Challenge your vendor all the more.