Posts Tagged ‘questionnaire design’

Sending Surveys to Your Customer List? Building a House Panel May Be Better

November 30, 2010

Many times when companies need information quickly, they conduct brief surveys. A single organization may have hundreds of individual accounts with online survey tools like Zoomerang and SurveyMonkey, and each employee assigned to such an account may send out surveys of his or her own, depending on the needs of his or her department. The respondents for these surveys are most frequently drawn from the customer list, often pulled from an internal database or from the sales force’s contact management software. This can be a bad idea.

Essentially, what is happening here is that there is no designated owner for marketing research – particularly surveys – in these organizations. As a result, everyone takes it upon himself or herself to collect data via a survey. Since many of these departments have no formal training in questionnaire design, sampling theory, or data analysis, they are bound to get biased, useless results. Moreover, not only does the research process degrade, but customers get confused by poorly worded questions and overwhelmed by too many surveys in too short a period of time, causing response rates to go down.

In the November 2010 issue of Quirk’s Marketing Research Review, Jeffrey Henning, the founder and vice president of strategy at Vovici, said that companies must first recognize that customer feedback is an asset and then treat it as such. One way to do that would be to build a house panel – a panel developed internally for the organization’s own use.

To do this, there must be a designated panel owner who is responsible for developing the panel. This responsibility should fall within the marketing department, and more precisely, the marketing research group. The panel owner must be charged with understanding the survey needs of each stakeholder; the types of information often sought; the customers who are to be recruited to or excluded from the panel; the information to be captured about each panel member; the maintenance of the panel; and the rules governing how often a panelist is to be surveyed and which panelists get selected for a particular survey. In addition, all surveys should be requisitioned by the interested departments through the marketing research group, which can then ensure that best practices for using the house panel are being followed and that duplication of effort is minimized if not eliminated.

A house panel can take some time to develop. However, house panels are far preferable to dirty, disparate customer lists: they preserve customers’ willingness to participate in surveys, ensure that surveys are designed to capture the correct information, and help ensure that the insights they generate are actionable.

*************************

Be Sure to Follow us on Facebook and Twitter !

Thanks to all of you, Analysights now has nearly 200 fans on Facebook … and we’d love more! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you and they will be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!


Survey Question Dos and Don’ts Redux

October 19, 2010

This past summer, I published a series of posts for Insight Central about effective questionnaire design. It cannot be stressed enough that survey questions must be carefully thought out in order to obtain information you can act on. In this month’s issue of Quirk’s Marketing Research Review, Brett Plummer of HSM Group, Ltd. reiterates many of the points made in my earlier posts.

Plummer’s article (you’ll need to enter the code 20101008 in the Article ID blank) provides a series of dos and don’ts when writing survey questions. I’ll summarize them here:

Do:

  1. Keep your research objectives in mind;
  2. Consider the best type of question to use for each item you ask about;
  3. Think about how you’re going to analyze your data;
  4. Make sure all valid response options are included; and
  5. Consider where you place each question within your survey.

Don’t:

  1. Create confusing or vague questions;
  2. Forget to ensure that the response options to questions are appropriate, thorough, and not overlapping;
  3. Ask leading questions; and
  4. Ask redundant questions.

Plummer does a good job of reminding us of the importance of these guidelines and points out that effective survey questions are the key to an organization’s obtaining the highest quantity and quality of actionable information, and thus maximizing its research investment.

Avoiding Biased Survey Questions

July 19, 2010

Adequate thought must be given to designing a questionnaire. Ask the wrong questions, or ask questions the wrong way, and you can end up with useless information; make the survey difficult or cumbersome, and respondents won’t participate; put the questions in the wrong order and you can end up with biased results. The most common problem with wording survey questions is bias. Biased questions are frequently asked in surveys administered by groups or organizations that are seeking to advance their political or social action agendas, or by certain departments or units within a corporation or organization likewise seeking to improve their political standing within the organization. Consider the questions below:

“Do you think the senseless war in Iraq that President Bush insisted on starting is going to result in thousands of unnecessary deaths?”

“Do you think the unprecedented trillion-dollar federal deficit the Democrats are creating with their out-of-control spending is going to have disastrous consequences for our nation?”

“Do you favor repeal of the death tax, so that many families won’t be unfairly burdened with hefty taxes at the time of their grief?”

Could these questions be more biased? Notice the adjectives in the questions, words like “senseless,” “unnecessary,” “unprecedented,” “out-of-control,” “disastrous,” “unfairly,” “burdened,” and “hefty.” All of them make it clear that a certain answer to each question is expected.

Look also at the descriptive words in some of the questions: “trillion-dollar,” “death” (as opposed to “estate”). You can see further manipulation. Worded the way they are, these questions stir up the emotions, which surveys are not supposed to do.

Removing the Bias

Can these questions be improved? Depending on the objectives of the survey, most definitely. In the first question, we might simply change the question to a multiple choice and ask:

What is your opinion regarding President Bush’s decision to send troops to Iraq?

Totally Sensible

Mostly Sensible

Somewhat Sensible

Not Sure

Somewhat Senseless

Mostly Senseless

Totally Senseless

 

Notice the difference? Here, the question is neutral. It also gives the survey taker options that reflect how strongly he/she feels about President Bush’s decision.

How about the second question? Perhaps we can try this:

In your opinion, how serious will the consequences of the federal budget deficit be for the nation?

Very Serious (5)

Serious (4)

Slightly Serious (3)

Not Very Serious (2)

Not at All Serious (1)

 

Here, we again neutralize the tone of the question and we let the respondent decide how severe the impact of the deficit will be. Notice also that we used an unbalanced scale, like we discussed last week. That’s because we would expect more respondents to select choices toward the “serious” end of the scale. This revised question focuses on the seriousness of the deficit. We could also ask respondents about their perceptions of the size of the deficit:

How do you feel about the size of the federal budget deficit?

Too Large (5)

Very Large (4)

Slightly Large (3)

Just Right (2)

Too Small (1)

 

Again, we use an unbalanced scale for this one. If we ask both revised questions, we can gain great insight into the respondent’s perceptions of both the size and seriousness of the deficit. Ideally, we would ask the question about the deficit’s size before the question about its consequences.

These two revised questions should also point out another flaw with the original question: not only was it worded with bias, but it was also multipurpose or double-barreled. It was trying to fuse two thoughts about the deficit: that it was too large and that it was going to have serious consequences. Splitting it into two questions gives us another advantage: we can now see how many people think the deficit is too large but do not see it as a serious threat. After all, we may agree something is excessive but we may not necessarily agree about the impact of that excess.
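To illustrate that cross-analysis, here is a minimal sketch in Python using pandas; the handful of responses and the column names are hypothetical, made up purely for the example, not data from any actual survey:

    import pandas as pd

    # Hypothetical responses to the two revised deficit questions
    responses = pd.DataFrame({
        "deficit_size": ["Too Large", "Too Large", "Very Large",
                         "Too Large", "Just Right"],
        "consequences": ["Very Serious", "Not Very Serious", "Serious",
                         "Not at All Serious", "Not Very Serious"],
    })

    # Cross-tabulate perceived size against perceived seriousness
    print(pd.crosstab(responses["deficit_size"], responses["consequences"]))

    # Respondents who call the deficit too large but do not see it as a serious threat
    mask = (responses["deficit_size"] == "Too Large") & \
           (responses["consequences"].isin(["Not Very Serious", "Not at All Serious"]))
    print(mask.sum())   # 2 of the 5 hypothetical respondents

A cross-tabulation like this is only possible because the size and seriousness questions were asked separately.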

Now let’s look at the last question. Perhaps we can focus on the fairness of the estate tax:

What is your opinion regarding the fairness of the estate tax?

Absolutely Fair

Mostly Fair

Not Sure

Mostly Unfair

Absolutely Unfair

 

Of course, some respondents might not know what the estate tax is, so we need to describe it to them. Even in describing or defining something, we can open the door to bias, so we must choose our words carefully:

When a person dies, his or her heirs pay taxes on the amount of his/her estate that exceeds $1 million. This is known as the “estate” tax. What is your opinion regarding the fairness of such a tax?

Absolutely Fair

Mostly Fair

Not Sure

Mostly Unfair

Absolutely Unfair

 

This does a good job of describing the estate tax, but putting in the $1 million figure can bias the results. If a respondent’s net worth is nowhere close to $1 million, he or she may consider the estate tax fair, since his or her heirs are unlikely to be affected by it. Perhaps the question can be worded this way:

When a person dies, a portion of his or her estate is subject to an “estate” tax. Would you say that such a tax is:

Absolutely Fair

Somewhat Fair

Not Sure

Somewhat Unfair

Absolutely Unfair

 

I think this version is better, since it says “a portion” rather than a specific amount. While the $1 million example is more factual, it also introduces more normative considerations. By using “a portion,” respondents won’t concentrate on the dollar amount of the estate, but on the fairness of the estate tax.

The adage “It’s not what you say but how you say it,” rings very true in questionnaire design. You must choose your words carefully in order to get the information you need to make well-informed business decisions.

*************************

If you Like Our Posts, Then “Like” Us on Facebook and Twitter!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.

Does the Order of Survey Questions Matter? You Bet!

June 29, 2010

Thanks to online survey tools, the cost of executing a survey has never been lower. Online surveys allow companies to ask respondents more questions, ask questions on multiple (though related) topics, and get their results faster and less expensively than was once possible with telephone surveys. But the ability to ask more questions on more topics means that the sequence of survey questions must be carefully considered. While the order of the questions in a survey has always mattered, it is even more crucial now.

Order Bias

When question order is not considered, several problems occur, most notably order bias. Imagine that a restaurant owner was conducting a customer satisfaction survey. With no prior survey background, he creates a short survey, with questions ordered like this:

  1. Please rate the temperature of your entrée.
  2. Please rate the taste of your food.
  3. Please rate the menu selection here.
  4. Please rate the courtesy of your server.
  5. Please rate the service you received.
  6. Please rate your overall experience at this restaurant.

What’s wrong with this line of questioning? Assuming they all have the same answer choices, ranging from “poor” to “excellent,” plenty! First, when there are several questions in sequence with the same rating scales, there’s a great chance a respondent will speed through the survey, providing truthful answers near the beginning of the survey and less truthful answers further down. By placing the overall satisfaction question at the end, the restaurateur is biasing the response to it. Hence, if the respondent had a positive experience with the temperature of his/her food, that might cause a halo effect, making him/her think the taste was also good, as well as the menu selection, and so on. Halo effects can also be negative. That first question ends up setting the context in which the respondent views his satisfaction.

On the other hand, if the restaurateur shifts the order of the questions as shown below, he will get more reliable answers:

  1. Please rate your overall experience at this restaurant.
  2. Please rate the menu selection here.
  3. Please rate the temperature of your entrée.
  4. Please rate the taste of your food.
  5. Please rate the service you received.
  6. Please rate the courtesy of your server.

Notice the difference? The restaurateur starts with the overall satisfaction question, followed by satisfaction with the menu selection. Within the menu selection, the restaurateur asks specifically about the temperature and taste of the food. Then the restaurateur asks about the service, then specifically about the courtesy of the server. This process begins with the respondent’s overall satisfaction. Once a respondent offers an overall rating, he is then asked about a component of overall satisfaction (either the menu selection or the service), so that the researcher can determine whether a low overall satisfaction rating is brought on by low satisfaction with the menu, the service, or both. This leads the respondent to report truthfully how each component contributed to his/her satisfaction.

Respondent Confusion/No Coherent Organization

Imagine you had developed a new product and wanted to gauge purchase intent for it. There’s a ton of stuff you want to know about: the best price to charge, the best way to promote the product, where respondents will go to buy it, etc. Many survey neophytes may commingle the pricing, promotion, and distribution questions. This is a mistake! The respondent will become confused and fatigued if there’s no clear organization to your survey. If you are asking questions about those three components, your questionnaire should have three sections. At the start of each section, you should indicate, “this section asks you some questions about what you feel the ideal price for this product would be…” or “this section asks you about what features you would like or dislike in this product.” In this fashion, the respondent knows what the line of questioning is and doesn’t feel confused.

Tips for Ordering Survey Questions Effectively

These are just two examples. Essentially, if you want to order your questionnaire for maximum reliability, response, and clarity, remember to:

  1. Start with broad, general questions and move to narrow specific ones. If respondents haven’t formed a general opinion or point of view of your topic, you can start your questionnaire going from specific to general.
  2. As I mentioned in last week’s posts, sensitive questions should be asked late in the survey, after your previous questions have established rapport with the respondent.
  3. Unless the topic is highly sensitive, never start a questionnaire with an open-ended question.
  4. Save demographic and classification questions for the end of the questionnaire, unless you need to ask them in order to screen respondents for taking the survey.
  5. Use chronological sequences in questions when obtaining historical information from a respondent.
  6. Make sure all questions on a topic are complete before moving on to another topic, and use transitional statements between the topics, like I described in the prior paragraph.

Much like designing survey questions, the order of the questioning is as much an art as it is a science. Taking time to organize your questions will reward you with results that are reliable and actionable.

Randomized Responses: More Indirect Techniques to Asking Sensitive Survey Questions

June 23, 2010

Yesterday’s post discussed approaches for asking survey questions of a sensitive nature in a way that would make individual respondents more inclined to answer them truthfully. Sometimes, however, you don’t care about an individual respondent’s answer to the sensitive question, but would rather get an idea of the incidence of that sensitive issue among all respondents. Sometimes, knowing the incidence of such a topic is what we need in order to conduct further research, get an understanding of the market potential for a new product, or decide how to prioritize the allocation of resources for acting on it. The most effective ways to do this are through Randomized Response Techniques, which are useful for assessing group behavior, as opposed to individual behavior.

Let’s assume that you are marketing a new over-the-counter ointment for athlete’s foot to college males, and you want to understand how large a market you have for your ointment. You decide to survey 100 randomly selected college males. Asking them if they’ve had athlete’s foot might be something they don’t want to answer, yet you’re not concerned with whether a particular respondent has athlete’s foot, but rather with getting an estimate of how many college-age men suffer from it.

Try a Coin Toss

One indirect way of finding out the incidence of athlete’s foot among college men might be to ask a question like this:

“Flip a coin (in private) and answer ‘yes’ if either the coin came up heads or you’ve suffered from athlete’s foot in the last three months.”

If the respondent answers “yes” to the question, you will not know whether he did so because of the athlete’s foot or because of the coin toss. However, once you’ve compiled all the responses to this question, you can get a good estimate of the incidence of athlete’s foot among college males. You would figure it out as follows:

Total respondents: 100
Number answering “yes”: 65
Expected number of heads on the flip: 50
Excess “yes” answers over expected: 15
Percent with athlete’s foot (15/50): 30%

Generally, when you flip a coin, you expect the toss to come up heads about 50% of the time. So of the 100 respondents, roughly 50 would be expected to answer “yes” because of the coin alone; the remaining 50, who flipped tails, answered based on whether they have actually had athlete’s foot. If 65% of the respondents answer “yes” to the heads/athlete’s foot question, you are 15 points over the expected value, and those 15 extra “yes” answers must have come from the roughly 50 respondents who answered about the condition itself. Dividing that excess by 50 gives you an estimate that 30% of respondents have athlete’s foot.
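If you want to script this arithmetic, the calculation is straightforward. Here is a minimal sketch in Python; the function name and parameters are my own, not from any survey package, and it simply reproduces the 30% estimate above:

    def coin_toss_incidence(total_respondents, yes_answers, p_heads=0.5):
        """Estimate the incidence of a sensitive trait from the coin-toss question.

        Respondents answer "yes" if the coin came up heads OR they have had the
        condition, so any "yes" answers beyond the expected number of heads must
        come from the share of respondents who flipped tails and answered about
        the condition itself.
        """
        expected_heads = p_heads * total_respondents        # e.g., 50 of 100
        truthful_pool = (1 - p_heads) * total_respondents   # respondents who flipped tails
        excess_yes = yes_answers - expected_heads            # "yes" answers beyond the coin's share
        return excess_yes / truthful_pool

    # The example from the post: 100 respondents, 65 "yes" answers
    print(coin_toss_incidence(100, 65))   # 0.30, i.e., an estimated 30% incidence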

Roll the Dice

Another approach would be asking respondents to roll a die and answer one question if the roll comes up anywhere from 1 to 4 and answer another if the roll comes up 5 or 6. If the die comes up as 1-4, the respondent answers the question, “I have had athlete’s foot” with either a “Yes” or a “No.” Respondents whose die roll came up 5 or 6 will need to answer the yes/no question, “I have never had athlete’s foot.”

What is the probability that a respondent has had athlete’s foot? The probability of a “Yes” is determined as follows:

P(YES) = P(Directed to first question)*P(Answering Yes to first question) + P(Directed to second question)*P(Answering Yes to second question)

Remember that every respondent is directed to one of the two questions, so the probability of being directed to the first question must be subtracted from 1 to get the probability of being directed to the second. Likewise, because the second question is the negation of the first, the probability of answering “Yes” to it is 1 minus the probability of answering “Yes” to the first. Expressing the probabilities in decimal form, we modify the probability equation as follows:

P(YES)= P(Directed to first question)*P(Answering Yes to first question) + (1-P(Directed to first question))*(1-P(Answering Yes to first question))

In the above example, the probability of being assigned the first question (for rolling a 1-4) is .67 (four chances out of six, or two-thirds). Now, if 35 of the 100 respondents indicated “Yes” (to whichever question they were directed to), we get the following equation, denoting the probability of having had athlete’s foot as “P”:

0.35 = 0.67P + 0.33(1-P)

0.35 = 0.67P + 0.33 – 0.33P

0.35-0.33 = 0.67P – 0.33P

0.02 = 0.34P

P = 0.02/0.34 ≈ 0.0588, or 5.88%

Hence, an estimated 5.88% of respondents have had athlete’s foot.
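To make the algebra easy to reuse, here is a minimal Python sketch (the function name and structure are my own illustration) that solves P(YES) = p1*P + (1 – p1)*(1 – P) for P:

    def randomized_response_estimate(p_yes, p_first_question):
        """Solve p_yes = p1*P + (1 - p1)*(1 - P) for P, the estimated share of
        respondents who have had the sensitive condition.

        p_yes            -- observed proportion of "Yes" answers
        p_first_question -- probability of being directed to the first
                            ("I have had athlete's foot") question
        """
        p1 = p_first_question
        # Rearranging: p_yes - (1 - p1) = (2*p1 - 1) * P
        return (p_yes - (1 - p1)) / (2 * p1 - 1)

    # The post rounds two-thirds to 0.67, which reproduces its 5.88% estimate;
    # using the exact 4/6 would give an estimate of 5%.
    print(randomized_response_estimate(0.35, 0.67))   # ~0.0588
    # One additional "Yes" answer moves the estimate by almost three points:
    print(randomized_response_estimate(0.36, 0.67))   # ~0.0882

This also makes it easy to see the sensitivity discussed in the summary below: each one-point change in observed “Yes” answers shifts the estimate by roughly 1/0.34, or about three points.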

Summary

There are several other randomized response techniques available, but these two are examples you might want to try. Note that the dice approach may not be a very reliable estimator: if 36 respondents instead indicated “Yes,” the estimate jumps to 8.82%, so a one-point increase in “Yes” responses increases the estimated incidence by almost three points. Randomized response techniques are good when you don’t care about individual responses to sensitive information, but want to know the incidence of such behavior among your respondents. By wording questions in this fashion, you can put respondents at ease when asking these questions and give them the feeling their responses are obscured, all the while gaining estimates of the percentage of the group engaging in the behavior.