Posts Tagged ‘survey bias’

Sending Surveys to Your Customer List? Building a House Panel May Be Better

November 30, 2010

Many times when companies need information quickly, they conduct brief surveys. A single organization may have hundreds of individual accounts with online survey tools like Zoomerang and SurveyMonkey, and each employee assigned to such an account may send out surveys of his or her own, depending on the needs of his or her department. The respondents for these surveys are most often drawn from the customer list, frequently pulled from an internal database or from the sales force’s contact management software. This can be a bad idea.

Essentially, what is happening here is that there is no designated owner for marketing research – particularly surveys – in these organizations. As a result, everyone takes it upon himself or herself to collect data via a survey. Since many of these employees have no formal training in questionnaire design, sampling theory, or data analysis, they are bound to get biased, useless results. Moreover, not only does the research process degrade, but customers get confused by poorly worded questions and overwhelmed by too many surveys in too short a period of time, causing response rates to go down.

In the November 2010 issue of Quirk’s Marketing Research Review, Jeffrey Henning, the founder and vice president of strategy at Vovici, said that companies must first recognize that customer feedback is an asset and then treat it as such. One way to do that would be to build a house panel – a panel developed internally for the organization’s own use.

To do this, there must be a designated panel owner who is responsible for developing the panel. This role should fall within the marketing department, and more precisely, the marketing research group. The panel owner must be charged with understanding the survey needs of each stakeholder; the types of information often sought; the customers who are to be recruited to or excluded from the panel; the information to be captured about each panel member; the maintenance of the panel; and the rules governing how often a panelist is to be surveyed, or which panelists get selected for a particular survey. In addition, all survey requests should be submitted by the interested departments to the marketing research group, which can then ensure that best practices for using the house panel are followed and that duplication of effort is minimized, if not eliminated.
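As a rough illustration of the contact-frequency rules a panel owner might enforce, here is a minimal Python sketch. All names, dates, and thresholds are hypothetical assumptions for illustration, not prescriptions; the idea is simply to invite only panelists who have rested long enough since their last survey and have not hit a quarterly cap.

```python
from datetime import date, timedelta

# Hypothetical panel records: each entry tracks when the panelist was last surveyed
# and how many surveys he or she has received this quarter.
panel = [
    {"id": 101, "last_surveyed": date(2010, 10, 1), "surveys_this_quarter": 1},
    {"id": 102, "last_surveyed": date(2010, 11, 20), "surveys_this_quarter": 3},
    {"id": 103, "last_surveyed": date(2010, 8, 15), "surveys_this_quarter": 0},
]

MIN_DAYS_BETWEEN_SURVEYS = 30   # assumed rest period between invitations
MAX_SURVEYS_PER_QUARTER = 2     # assumed cap per panelist per quarter

def eligible_panelists(panel, today):
    """Return panelists who may be invited to the next survey under the panel rules."""
    cutoff = today - timedelta(days=MIN_DAYS_BETWEEN_SURVEYS)
    return [p for p in panel
            if p["last_surveyed"] <= cutoff
            and p["surveys_this_quarter"] < MAX_SURVEYS_PER_QUARTER]

print(eligible_panelists(panel, date(2010, 11, 30)))  # panelists 101 and 103 qualify
```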

A house panel can take some time to develop. However, house panels are far preferable to dirty, disparate customer lists: they preserve customers’ willingness to participate in surveys, ensure that surveys are designed to capture the correct information, and help ensure that the insights they generate are actionable.

*************************

Be Sure to Follow Us on Facebook and Twitter!

Thanks to all of you, Analysights now has nearly 200 fans on Facebook … and we’d love more! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you and they will be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!


Help! Customer Satisfaction is High But Sales are Down!

July 28, 2010

Customer satisfaction measurement has been of great interest to service organizations for some years now. Nearly every industry that is both highly competitive and heavily customer-facing – like restaurants, hotels, and banks – knows that a poor customer experience can result in lost future sales to the competition. As a result, these service-oriented businesses make every effort to keep their ears open to the voice of the customer. Indeed, customer satisfaction surveys proliferate – I once received five in a single week – as company after company strives to hear that customer voice.

And the effort may be futile. This isn’t to say that measuring customer satisfaction isn’t important – most certainly it is. But many companies may be overdoing it. In fact, some companies are seeing negative correlations between customer satisfaction and repeat business! Is this happening to you?

Reasons Why Satisfaction Scores and Sales Don’t Sync

If your customers are praising you in satisfaction surveys but you’re seeing no improvement in sales and repeat business, it could be for one or more of the following reasons:

You’re Not Asking the Question Right

Often, a disparity between survey results and actual business results can be attributed to the two measuring different things. If you simply ask, “Overall, how satisfied were you with your stay at XYZ Hotel?” it tells you only about the guest’s most recent experience. If 80 percent of your respondents indicate “Satisfied” or “Very Satisfied,” you only get information about their attitudes. Then you compare satisfaction scores to either total sales or repeat sales from quarter to quarter, and you find either no correlation or a negative correlation. Why? Because the survey question measured only perceived satisfaction, while the business results measured sales.

On the other hand, if you were to ask, “How likely are you to return to XYZ Hotel?” or “How likely are you to recommend XYZ Hotel to a friend or relative?” you might get a better match between responses and business outcomes.

Only Your Happiest Customers Are Responding

Another reason satisfaction scores may be high while sales are declining is that only your most loyal customers are taking the time to complete your survey. Those customers may have been conditioned to complete these surveys: their frequent patronage has earned them special incentives and better treatment than most customers receive.

Another, more dangerous, reason your happiest customers may be the only respondents is because the distribution of the survey is “managed,” being sent only to the people most likely to give high scores. There is a great risk of this bias in organizations where top executives’ compensation is tied to customer satisfaction scores.

Respondents Aren’t Telling the Truth

As much as we hate to admit it, we’re not as honest as we claim to be. This is especially true in surveys. Entire books could be written on respondent honesty (or the lack thereof). There are several reasons respondents don’t give truthful answers about their satisfaction. One obvious reason is courtesy; some just don’t like to give negative feedback. In addition, even with the promise of confidentiality, respondents worry that if they give a poor rating, they’ll receive a phone call from the business’s representative – a call they aren’t comfortable taking.

Survey incentives – if not carefully structured – can also lead to untruthful respondents. If you offer respondents a chance to win a drawing in exchange for completing your customer satisfaction survey, they may lie and say positive things about their experience in the hope that doing so will increase their odds of winning the prize.

You’re Hearing Your Customer but Not Really Listening

In many cases, your customers might say one thing but really mean another. The customer could be quite satisfied overall, yet one or two smaller issues, if left unaddressed, can reduce the likelihood of repeat business. For example, suppose you sell clothing online but not shoes, and your customer doesn’t discover this until after loading everything else into the online shopping cart. Assuming he or she doesn’t abandon the cart, the customer completes the order for the clothes. When the survey arrives, that customer might indicate being very satisfied with the order. But deep down, the same customer might not have liked that your online store doesn’t sell shoes. Whether or not the customer mentions the shoes in the survey, the next time he or she wants to buy clothes online, the customer may remember that you don’t sell shoes and place the entire order with a competitor who does.

How Can We Remedy This Disparity?

There are a few ways we can remedy these situations. First, make sure the questions you ask reflect your business goals. If you want satisfied customers to return, be sure to ask how likely they are to return. Then measure the scores against actual repeat business. If you want satisfied customers to recommend your business to a friend, make sure you ask how likely they are to do so and then measure that against referrals. Compare apples to apples.
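To make that apples-to-apples comparison concrete, here is a minimal sketch – with entirely hypothetical data and column names – of how you might relate stated likelihood to return to actual repeat business, assuming you can match survey responses to customer records:

```python
import pandas as pd

# Hypothetical data: one row per surveyed guest, with the 1-5 "likelihood to return"
# rating and whether the guest actually returned within six months.
df = pd.DataFrame({
    "likely_to_return": [5, 4, 2, 5, 3, 1, 4, 5, 2, 3],
    "returned":         [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
})

# Correlation between the stated intention and the actual behavior.
print(df["likely_to_return"].corr(df["returned"]))

# Repeat-visit rate by rating makes the comparison even more concrete.
print(df.groupby("likely_to_return")["returned"].mean())
```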

Second, reduce incentives for bias. Ideally, no executive’s compensation should be tied to survey ratings. Instead, tie compensation to actual results. If compensation must be tied to survey results, then by all means make sure the survey is administered by employees with no vested interest in the outcome of the survey. Also, make sure that your entire list of people to survey comes from similarly disinterested employees of the organization.

Third, encourage non-loyal customers to participate. You might create a separate survey for your most loyal customers. For the non-loyal customers, make sure you have ways to encourage them to respond. Whether it’s through an appropriate incentive (say a coupon for a future visit), or through friendly requests, let your non-loyal customers know you still care about their feedback.

Fourth, place reliability checks in your survey. Ask the same question in two ways (positive and negative) or phrase it slightly differently and compare the results. In the former example, you would expect the answers to be on opposite ends of the rating scale. In the latter, you would expect consistency of responses on the same end of the scale. This helps you determine whether respondents are being truthful.
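Here is one rough sketch of such a reliability check, assuming a 1-to-5 scale and two hypothetical items, one worded positively and one negatively; the negative item is reverse-coded and compared with its counterpart:

```python
import pandas as pd

# Hypothetical responses to a positively worded item ("The staff was helpful")
# and its negatively worded counterpart ("The staff was unhelpful").
# Truthful respondents should land on opposite ends of the two scales,
# so the reverse-coded answers should roughly agree.
df = pd.DataFrame({
    "staff_helpful":   [5, 4, 5, 2, 5],
    "staff_unhelpful": [1, 2, 1, 4, 5],
})

df["unhelpful_reversed"] = 6 - df["staff_unhelpful"]   # reverse-code the negative item
df["inconsistent"] = (df["staff_helpful"] - df["unhelpful_reversed"]).abs() > 1

# The last respondent answered 5 to both items and gets flagged for review.
print(df[df["inconsistent"]])
```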

Finally, be proactive. In the example of your online clothing store, you might have the foresight to realize that your decision not to sell shoes may impact satisfaction and future business. So you might be upfront about it, but at the same time, offer a link to a cooperating online retailer who does sell shoes, and allow the customer to order shoes from that retailer using the same shopping cart. That may keep the customer’s satisfaction high and increase his/her likelihood of future business.


 

*************************

If you Like Our Posts, Then “Like” Us on Facebook and Twitter!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.

Avoiding Biased Survey Questions

July 19, 2010

Adequate thought must be given to designing a questionnaire. Ask the wrong questions, or ask questions the wrong way, and you can end up with useless information; make the survey difficult or cumbersome, and respondents won’t participate; put the questions in the wrong order and you can end up with biased results. The most common problem with wording survey questions is bias. Biased questions are frequently asked in surveys administered by groups or organizations seeking to advance their political or social action agendas, or by departments or units within an organization likewise seeking to improve their political standing. Consider the questions below:

“Do you think the senseless war in Iraq that President Bush insisted on starting is going to result in thousands of unnecessary deaths?”

“Do you think the unprecedented trillion-dollar federal deficit the Democrats are creating with their out-of-control spending is going to have disastrous consequences for our nation?”

“Do you favor repeal of the death tax, so that many families won’t be unfairly burdened with hefty taxes at the time of their grief?”

Could these questions be more biased? Notice the adjectives in the questions, words like “senseless,” “unnecessary,” “unprecedented,” “out-of-control,” “disastrous,” “unfairly,” “burdened,” and “hefty.” All of them make it clear that a certain answer to each question is expected.

Look also at the descriptive words in some of the questions: “trillion-dollar,” “death” (as opposed to “estate”). You can see further manipulation. Worded the way they are, these questions stir up the emotions, which surveys are not supposed to do.

Removing the Bias

Can these questions be improved? Depending on the objectives of the survey, most definitely. In the first question, we might simply change the question to a multiple-choice format and ask:

What is your opinion regarding President Bush’s decision to send troops to Iraq?

Totally Sensible

Mostly Sensible

Somewhat Sensible

Not Sure

Somewhat Senseless

Mostly Senseless

Totally Senseless

 

Notice the difference? Here, the question is neutral. It also gives the survey taker options that reflect how strongly he or she feels about President Bush’s decision.

How about the second question? Perhaps we can try this:

In your opinion, how serious will the consequences of the federal budget deficit be for the nation?

Very Serious (5)

Serious (4)

Slightly Serious (3)

Not Very Serious (2)

Not at All Serious (1)

 

Here, we again neutralize the tone of the question, and we let the respondent decide how severe the impact of the deficit will be. Notice also that we used an unbalanced scale, as we discussed last week; that’s because we would expect more respondents to select choices on the left-hand side of the scale. This revised question focuses on the seriousness of the deficit. We could also ask respondents about their perceptions of the size of the deficit:

How do you feel about the size of the federal budget deficit?

Too Large (5)

Very Large (4)

Slightly Large (3)

Just Right (2)

Too Small (1)

 

Again, we use an unbalanced scale for this one. If we ask both revised questions, we can gain great insight into respondents’ perceptions of both the size and the seriousness of the deficit. Ideally, we would ask the question about the deficit’s size before the question about its consequences.

These two revised questions should also point out another flaw with the original question: not only was it worded with bias, but it was also multipurpose, or double-barreled. It tried to fuse two thoughts about the deficit: that it is too large and that it will have serious consequences. The two revised questions give us another advantage: we can now see how many people think the deficit is too large but do not see it as a serious threat. After all, we may agree that something is excessive without necessarily agreeing about the impact of that excess.
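A simple cross-tabulation makes that comparison easy to see. The sketch below uses hypothetical responses coded with the scale values shown above:

```python
import pandas as pd

# Hypothetical responses to the two revised deficit questions, using the
# coded scales above (size: 5 = Too Large ... 1 = Too Small;
# seriousness: 5 = Very Serious ... 1 = Not at All Serious).
df = pd.DataFrame({
    "deficit_size":        [5, 5, 4, 5, 3, 5, 4, 2],
    "deficit_seriousness": [5, 2, 4, 5, 3, 1, 5, 2],
})

# The cross-tab shows how many respondents think the deficit is too large
# yet do not see it as a serious threat.
print(pd.crosstab(df["deficit_size"], df["deficit_seriousness"]))
```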

Now let’s look at the last question. Perhaps we can focus on the fairness of the estate tax:

What is your opinion regarding the fairness of the estate tax?

Absolutely Fair

Mostly Fair

Not Sure

Mostly Unfair

Absolutely Unfair

 

Of course, some respondents might not know what the estate tax is, so we need to describe it to them. Even in describing or defining something, we can open the door to bias, so we must choose our words carefully:

When a person dies, his or her heirs pay taxes on the amount of his/her estate that exceeds $1 million. This is known as the “estate” tax. What is your opinion regarding the fairness of such a tax?

Absolutely Fair

Mostly Fair

Not Sure

Mostly Unfair

Absolutely Unfair

 

This does a good job of describing the estate tax, but putting in the $1 million figure can bias the results. If a respondent’s net worth is nowhere close to $1 million, he or she may consider the estate tax fair simply because his or her heirs are unlikely to be affected by it. Perhaps the question can be worded this way:

When a person dies, a portion of his or her estate is subject to an “estate” tax. Would you say that such a tax is:

Absolutely Fair

Somewhat Fair

Not Sure

Somewhat Unfair

Absolutely Unfair

 

I think this version is better, since it says “a portion” rather than a specific amount. While the $1 million example is more factual, it also introduces more normative considerations. By using “a portion,” we keep respondents from concentrating on the dollar amount of the estate and focus them instead on the fairness of the estate tax.

The adage “It’s not what you say but how you say it” rings very true in questionnaire design. You must choose your words carefully in order to get the information you need to make well-informed business decisions.

*************************


Radio Commercial Statistic: Another Example of Lies, Damn Lies, and then Statistics

May 10, 2010

Each morning, I awake to my favorite radio station, and the last few days I’ve awakened to a commercial about a partnership between Feeding America and the reality show Biggest Loser to support food banks. While I think that’s a laudable joint venture, I have been somewhat puzzled by, if not leery of, a claim made in the commercial: that “49 million Americans struggled to put food on the table.” Forty-nine million? That’s one out of every six Americans!

Lots of questions popped into my head: Where did this number come from?  How was it determined?  How did the study define “struggling?”  Why were the respondents struggling?  How did the researcher define the implied “enough food?”  What was the length of time these 49 million people went “struggling” for enough food?  And most importantly, what was the motive behind the study?

The Biggest Loser/Feeding America commercial is a good reminder of why we should never take numbers or statistics at face value.  Several things are fishy here.  Does “enough food” mean the standard daily calorie intake (which, incidentally, is another statistic)?  Or, given that two-thirds of Americans are either overweight or obese (another statistic I have trouble believing), is “enough food” defined as the average number of calories a person actually eats each day?

I also want to know how the people who conducted the study came up with 49 million. Surely they could not have surveyed that many people. Most likely, they surveyed a sample of people and then made statistical estimations – extrapolations – based on the size of the population. In order to do that, the sample needed to be selected randomly: that is, every American had to have an equal chance of being selected for the survey. That’s the only way we could be sure the results are representative of the entire population.

Next, who completed the survey, and how many? The issue of hunger is political in nature, and hence likely to be very polarizing. Generally, people who respond to surveys on such politically charged issues have a vested interest in the subject matter. This introduces sample bias. Also, having an adequate sample size (neither too small nor too large) is important. There’s no way to know whether the study that came up with the “49 million” statistic accounted for these issues.
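For what it’s worth, here is a rough sketch of how such an extrapolation and its margin of error would typically be computed. The sample figures and the population size are entirely hypothetical assumptions; the real study’s numbers are unknown:

```python
import math

# Hypothetical figures: suppose 1,000 randomly selected adults were surveyed and
# 160 said they struggled to put food on the table. The population figure below
# is an assumption used only to illustrate the extrapolation.
n, successes = 1000, 160
population = 306_000_000

p_hat = successes / n                          # sample proportion (0.16)
z = 1.96                                       # z-value for a 95% confidence level
moe = z * math.sqrt(p_hat * (1 - p_hat) / n)   # margin of error for a proportion

low, high = p_hat - moe, p_hat + moe
print(f"Estimated share struggling: {p_hat:.1%} +/- {moe:.1%}")
print(f"Extrapolated count: {low * population / 1e6:.0f} to {high * population / 1e6:.0f} million")
```

The extrapolation is only valid if the sample was truly random and representative, which is exactly what we cannot verify from the commercial.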

We also don’t know how long these 49 million had to struggle in order to be counted. Was it any one time during a certain year, or did the struggle have to last at least two consecutive weeks before it could be counted? We’re not told.

As you can see, the commercial’s claim of 49 million “struggling to put food on the table” just doesn’t sit right with me. Whenever you must rely on statistics, you must remember to:

  1. Consider the source of the statistic and its purpose in conducting the research;
  2. Ask how the sample was selected and the study executed, and how many responded;
  3. Understand the researcher’s definition of the variables being measured;
  4. Not look at just the survey’s margin of error, but also at the confidence level and the diversity within the population being sampled. 

The Feeding America/Biggest Loser team-up is great, but that radio claim is a sobering example of how statistics can mislead as well as inform.

Beware of “Professional” Survey Respondents!

April 3, 2009

Thanks to the Internet, conducting surveys has never been easier.  Being able to use the Web to conduct marketing research has greatly reduced the cost and time involved and has democratized the process for many companies.

While online surveys have increased simplicity and cost-savings, they have also given rise to a dangerous breed of respondents – “Professional” survey-takers.   

A “professional” respondent is one who actively seeks out online surveys offering paid incentives – cash, rewards, or some other benefit – for completing the survey.  In fact, many blogs and online articles tell of different sites people can go to find paid online surveys.

If your company conducts online surveys, “professionals” can render your findings useless.  In order for your survey to provide accurate and useful results, the people surveyed must be representative of the population you are measuring and selected randomly (that is, everyone from the population has an equal chance of selection).

“Professionals” subvert the sampling principles of representativeness and randomness simply because they self-select to take the survey. The survey tool has no way of knowing whether they belong to the population being measured, nor what their probability of selection was. What’s more, online surveys exclude people without Internet access from the population. The result is a survey-bias double whammy.

In addition, “professionals” may simply go through a survey for the sake of the incentive.  Hence they may speed through it, paying little or no attention to the questions, or they may give untruthful answers.  Now your survey results are both biased and wrong.

Minimizing the Impact of “Professionals”

There are some steps you can take to protect your survey from “professionals,” including:

  • Maintain complete control of your survey distribution.  If possible, use a professional online survey panel company, such as e-Rewards, Greenfield Online, or Harris Interactive.  There are lots of others, and all maintain tight screening processes for their survey participants and tight controls for distribution of your survey;
  • If an online survey panel is out of your budget, perhaps you can build your own controlled e-mail list (following CAN-SPAM laws, of course).  E-mailing your survey is less prone to bias than keeping it on a Web site for anyone to join.
  • Have adequate screening criteria in your survey.  If you can get respondents to sign in using a passcode and/or ask questions at the beginning, which terminate the survey for people whose responses indicate they are not representative of the population, you can reduce the number of “professionals”;
  • Put “speed bumps” into your survey.  An example would be a dummy question that simply says: “Select the 3rd radio button from the top.”  Put two or three bumps in your survey.  A respondent who answers two or more of those bump questions incorrectly is likely to be a speeder, and the survey can be instructed to terminate (see the sketch after this list for one way to score these checks);
  • Ask validation questions.  That is, ask a question one way and then later in the survey ask it in another form, and see if the responses are consistent.  If they’re not, then the respondent may be a “professional” or a speeder.
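Here is a minimal sketch, with hypothetical question names and made-up data, of how the speed-bump and validation-question checks above might be scored once the responses are exported:

```python
import pandas as pd

# Hypothetical survey export: q_trap1 and q_trap2 are the "speed bump" items
# whose correct answers are known in advance; q7 and q18 are the same question
# asked two ways for validation.
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "q_trap1":    [3, 3, 1, 3],   # correct answer is 3
    "q_trap2":    [2, 2, 4, 5],   # correct answer is 2
    "q7":         [5, 4, 2, 5],
    "q18":        [5, 4, 5, 1],
})

df["bumps_missed"] = (df["q_trap1"] != 3).astype(int) + (df["q_trap2"] != 2).astype(int)
df["validation_fail"] = (df["q7"] - df["q18"]).abs() > 1

# Flag likely speeders or "professionals" for removal before analysis.
suspect = df[(df["bumps_missed"] >= 2) | (df["validation_fail"])]
print(suspect[["respondent", "bumps_missed", "validation_fail"]])
```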

The Internet may have made marketing research easier, but it has also made it more susceptible to bias.  The tools for conducting marketing research have become much simpler and more user-friendly, but that doesn’t change the principles of statistics and marketing research.  Online surveys, no matter how easily, quickly, or cheaply they can be implemented, will waste time and money if those principles are violated.