Posts Tagged ‘respondents’

Consider Respondents When Using Rating Scale Questions in Surveys

July 13, 2010

The art of questionnaire design is full of minute details, especially when it comes to rating scales. The considerations for ratings questions are numerous: How many scale points should you use? An even or odd number of points? A balanced or unbalanced scale? Forced or unforced choice? With so many options, many researchers default to a five- or 10-point rating scale simply out of habit or past experience. A poorly chosen rating scale can lead to biased responses, respondent fatigue and abandonment, and useless results. When deciding on which rating scales to use, the most important first step is to consider who your respondents are.

How Many Points?

The number of points to use in a rating scale can be a challenging choice. Use too few points, and you may not get very precise data; use too many, and you may confuse or tire your respondents. Just how many points are appropriate depends on your audience. If your respondents are likely to skew heavily positive or heavily negative, you might opt for more points, such as a seven- to 10-point scale. This is because people who are generally positive (or negative) toward your company or product can differ in the intensity of their attitudes.

Let’s assume a professional association conducts a survey of its members and asks the question “Overall, how satisfied are you with your membership in our organization?” Consider a standard five-point scale, ranging from “very dissatisfied” to “very satisfied.”

Generally, if 80% of the association’s members were either “satisfied” or “very satisfied,” the result is of little value to the association: there’s no way to gauge the intensity of their satisfaction. But if the association were to use a nine-point scale instead, those 80% of satisfied members would be spread out further in terms of their satisfaction. For example, if 80% of respondents give a score greater than 5, but only 10% give a score of 9, then the association has an approximation of its hardest-core supporters and a better idea of how fluid member satisfaction is. It can then focus on developing programs that graduate members from ratings of six through eight toward a nine.
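To make the arithmetic concrete, here is a minimal sketch in Python (the ratings data are hypothetical) that tallies a nine-point distribution, the share scoring above the midpoint, and the top-box share that approximates the hardest-core supporters:

```python
from collections import Counter

# Hypothetical nine-point satisfaction ratings from a member survey
ratings = [6, 7, 9, 8, 5, 7, 6, 9, 8, 7, 4, 6, 8, 7, 9, 6, 7, 8, 5, 7]

counts = Counter(ratings)
n = len(ratings)

satisfied_share = sum(1 for r in ratings if r > 5) / n   # scored above the midpoint
top_box_share = counts[9] / n                            # the "hardest core" supporters

print(f"Above midpoint: {satisfied_share:.0%}")   # Above midpoint: 85%
print(f"Top box (9s):   {top_box_share:.0%}")     # Top box (9s):   15%
```

The same tally on a five-point scale would lump most of these respondents into just two boxes, which is exactly the loss of intensity information the nine-point scale avoids.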

Also, the lengthier scale can be useful if you’re using this question’s responses as a dependent variable in a regression analysis, using responses to other questions to predict responses to this one. A five-point scale often lacks the variance to support that kind of analysis. Of course, a seven-point scale might be used instead of a nine-point one, depending on the degree of skewness in responses.

How Do You Determine the Degree of Respondent Skewness Before Administering the Survey?

It can be hard to know in advance how respondents will rate and whether the ratings will be normally distributed or skewed. There are two ways to find out: past surveys and pilot surveys.

Past Surveys

If the association has conducted this membership satisfaction survey in the past, it can review how responses have traditionally fallen. If responses have generally been normally distributed, and the association has been using a five-point scale, then the association might want to stay the course.

On the other hand, if the association finds that past responses fall lopsidedly on one side of its five-point scale, then it might consider lengthening the scale. Or, if the association was previously using a seven- or nine-point scale and finding sparse responses at both ends (because the scale offered more points than respondents used), it may choose to collapse the scale down to five points.
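One simple way to quantify that lopsidedness is a skewness coefficient. Here is a minimal sketch with hypothetical past five-point responses; a value far from zero suggests responses are piling up on one side of the scale:

```python
from statistics import mean, pstdev

# Hypothetical five-point ratings from a past survey
past_ratings = [4, 5, 5, 4, 3, 5, 4, 5, 5, 4, 5, 3, 4, 5, 5]

m = mean(past_ratings)
s = pstdev(past_ratings)
n = len(past_ratings)

# Fisher-Pearson skewness: a negative value means a pile-up at the high end
skewness = sum((x - m) ** 3 for x in past_ratings) / (n * s ** 3)
print(f"skewness = {skewness:.2f}")
```

For these hypothetical ratings the coefficient comes out clearly negative, the kind of result that would argue for stretching the scale to seven or nine points.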

Making changes to survey scales based on past survey responses can be problematic, however, if the past surveys are used for benchmarking. Care must be taken to ensure that results from the modified scale can be translated or imputed to the scales of past surveys, so that comparability is maintained.

Pilot Surveys

The association can also use a pilot survey as a litmus test of the spread of respondent opinion. If the association is unsure how members will score on certain rating questions, it might send two or three versions of the same questions to a very small sample of its membership: one testing a five-point scale, another a seven-point, and a third a nine-point. If results come back normally distributed on the five-point version, and sparser and more spread out on the seven- and nine-point versions, then the association knows that a five-point scale is appropriate.

If, on the other hand, the association notices concentration on one end of the scale for all three versions, then it can look at the seven and the nine-point tests. If it sees more sparseness in the nine-point scale, then it may opt for the seven-point scale. Otherwise, it may choose to go with the nine-point scale.

Of course, for the pilot survey to work, each member of the association must have an equal chance of selection. Of those members who do receive the pilot survey, each must also have an equal chance of getting one of the three versions. This ensures a random probability sample which can be generalized to the association’s full membership base.
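The two stages of randomization described above can be sketched with Python’s standard library (the roster and sample sizes here are hypothetical):

```python
import random

random.seed(42)  # seeded only so the sketch is reproducible

# Hypothetical membership roster
members = [f"member_{i:04d}" for i in range(1, 5001)]

# Stage 1: every member has an equal chance of selection
pilot_sample = random.sample(members, k=300)

# Stage 2: each sampled member has an equal chance of getting each version
versions = ["5-point", "7-point", "9-point"]
assignments = {m: random.choice(versions) for m in pilot_sample}

counts = {v: sum(1 for a in assignments.values() if a == v) for v in versions}
print(counts)
```

Because both stages are uniform random draws, results from any of the three versions can be generalized to the full membership base.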

As you can see, there are lots of considerations involved in constructing a rating scale question. In tomorrow’s blog post, we’ll discuss whether it’s best to use an even or odd number of points, and hence, forced and unforced choices. 

*************************************

Let Analysights Take the Pain out of Survey Design!

Rating scales are but one of the important things you need to consider when designing an effective survey.  If you need to design a survey that gets to the heart of what you need to know in order for your company to achieve marketing success, call on Analysights.  We will take the drudgery out of designing your survey, so you can concentrate on running your business.  Check out our Web site or call (847) 895-2565.


A Typical-Length Survey or A Few Shorter Ones?

June 28, 2010

Most online surveys today take between 10 and 15 minutes, with a few going as long as 25 to 30 minutes. As marketing researchers, we have long pontificated that surveys should be a reasonable length, as longer ones tend to cause respondents to disengage in many ways: speeding through, skipping questions, even abandoning the survey. Most marketers realize this, and the 10-15 minute survey seems to be the norm. But I wonder how many marketing researchers – on both the client and supplier side – have ever considered the length of a survey from a strategic, rather than a tactical, point of view.

Sure, a typical-length survey is not super long, and is often cost effective for a client. After all, the client can survey several people about several topics in a relatively short time, for a set price, and can get results quickly. But sometimes I believe that instead of one 15-minute survey, some clients might benefit more by conducting two 7- or 8-minute surveys, or three 5-minute surveys, stretched out over time. Marketing researchers on both sides will likely disagree with me here. After all, multiple shorter surveys can cost more to administer. However, I believe that – in the long run – clients will derive value from the more frequent, shorter surveys that would offset their cost. Multiple, shorter surveys will benefit clients in the following ways:

Focus

As marketing research suppliers, it is our job to make sure we understand the client’s key business problem. Many times, clients have several problems that must be addressed. We need to help clients look at all of their business problems and prioritize them by the benefit their resolution would bring. If we can get the client’s survey focused on the one or two problems whose resolution would make the most positive difference, we can keep the survey short, with more targeted questions. As a result, the client doesn’t get bombarded with tons of data tables, or with reports full of recommendations that leave it immobilized wondering which to implement first. Instead, the client receives a few very direct insights about how to respond to these key problems.

Reduced Incentive Costs

Since surveys are shorter, respondents may be willing to do them for little or no incentive. This can save the client money.

Higher Response Rates

Surveys that are 10-15 minutes long generally get decent response rates. However, a survey that’s 3, 5, or 7 minutes long will likely get excellent response rates. Why? Because shorter surveys are more convenient, straight to the point, and can be knocked off quickly. As a result, respondents are less inclined to put them off, and less likely to terminate the survey, speed through it, or skip questions.

Increased Trust by Respondents

Because you didn’t waste their time with the first survey, respondents may be more inclined to participate in your subsequent surveys. If they took your 5-minute survey today, then you send them another 5-minute survey four to six weeks from now, they are likely to trust that this survey won’t take long either, and will likely respond to it. Of course, the key here is to space the surveys out. You don’t want to send all three at once!

More Reliable Data

As mentioned above, respondents are less likely to speed, terminate, or skip questions to a short survey than they are with a longer one. As a result, there will be less non-response error and more truthful responses in the data, and hence more trustworthy findings.

Ability to Act on Results Faster

Because the survey is short and to-the-point, and response rates are higher, the client can achieve the desired number of completed surveys sooner than if the survey were longer, so the survey doesn’t have to be in the field as long. And because the survey is short, the time the marketing research firm needs to tabulate and analyze the data is much shorter. Hence the client can start acting on the insights and implementing the recommendations much sooner.

Discovery

What would happen if a client conducted a typical-length survey and found a theme emerging in open-ended questions or a trend in responses among a certain demographic group? The client may want to study that. But custom research is expensive. If the client did a typical-length survey, the budget may not be there to do another survey to investigate that newly discovered theme or trend. With a shorter survey, the cost may be somewhat lower, so funds might be left in the budget for another survey. In addition, if the client is scheduling subsequent shorter surveys, the learnings from the first survey can be used to shape questions for further investigation in those upcoming surveys.

The Shorter Survey May Be Enough

Often, problems are interconnected, or generated by other problems. If research suppliers help clients isolate their one or two biggest problems, and focus on those, the client might act on the insights and eliminate those problems. The resolution of those problems may also provide solutions to, or help extinguish, the lesser-priority problems. As a result, future surveys may not be needed. In that case, the research supplier did its job: solving the client’s problem in the shortest, most economical, and most effective manner possible.

Granted, many clients probably can’t do things this way. There are economies of scale in doing one longer survey as opposed to two or three shorter ones. Moreover, the client probably has several stakeholders, each of whom has a different opinion of which problem is most important. And each problem may have a different urgency to those stakeholders. This is why it is so important for the research supplier to get the client’s stakeholders and top management on board. As research suppliers, it is our job to inform and educate the client and its stakeholders on the research approach that best serves the interests of the client as a whole; and if that is not possible, to work with those stakeholders to identify second-best solutions. But once the key issues – problems, budget, politics, and urgency – are on the table, research suppliers can work with the client to develop the shortest, most focused, most cost-effective survey possible.

Asking Sensitive Survey Questions

June 22, 2010

As marketers, sometimes we need to get information from respondents that they may not be willing to volunteer freely. When confronted with such inquiries, people may ignore the question, provide untrue or incomplete responses, or even terminate the survey. Yet the survey often provides the only feasible means of obtaining information about a respondent’s religious affiliation, race, income, or other sensitive matters. What’s a marketer to do? There are several ways around it:

Build Rapport with Respondent

Quite often, it is best to start a survey with neutral questions and let the respondent work his or her way through, with each question leading up to the sensitive information you need to ask about. Placing controversial questions late in the questionnaire has two benefits. First, if the respondent chooses to stop the survey upon reaching the sensitive questions, you still have his or her answers to all the questions beforehand, which you can use for other analyses. Second, as the respondent works through the easy, unthreatening questions, he or she may feel that trust is being established, and will be more likely to answer the sensitive questions.

Be Casual About it!

Let’s assume you are trying to measure the incidence of tax cheating. Getting truthful responses can be very difficult. Try reducing the perceived importance of the topic by asking the question in a nonchalant manner: “Did you happen to have ever cheated on your taxes?” Worded this way, the question leads the respondent to believe the survey’s authors do not think that tax cheating is a big deal, so the respondent may be coaxed to answer truthfully.

Make it Sound Like “Everybody’s Doing It!”

Instead of directly asking a respondent if he or she cheats on his/her taxes, ask if they know of anyone who does. “Do you know any people who cheated on their taxes?” Then the next question could be, “How about you?” When he or she feels he/she isn’t alone, the respondent may be more inclined to be honest. Another way is to combine the casual approach with this one: “As you know, many people have been cheating on their taxes these days. Do you happen to have cheated on yours?”

Choose Longer Questions Instead of Shorter Ones

Longer questions can “soften the blow” with the excess verbiage, and reduce the threat. Consider these examples:

  1. “Even the most liberal people don’t pay their fair share of taxes to the government. Have you, yourself, not reported all your income to the government in the past two years?”
  2. “Investor’s Business Daily recently reported on the widespread practice among middle-class Americans of not reporting all their income for tax purposes. Have you happened to report less than all your income at tax time?”
  3. “Did things come up that kept you from reporting all your income to the IRS, or did you happen to report all your income?”

Note the patterns here. In the first question, we again make it sound like everyone is cheating on taxes. In the second, we appeal to an authority. In the third, we make it sound like circumstances beyond the respondent’s control kept him or her from reporting all of his or her income.

Try Some Projective Techniques

Make it sound like the respondent is just giving an estimate about someone else. Ask, “As your best guess, approximately what percentage of people in your community fail to report all their income at tax time?” When asked this way, a respondent might base the response on his or her own personal experience.

Try a Hierarchy of Sensitive Issues

Use a question that shows a list of answers ordered from least sensitive to most sensitive, like this one:

“In the past 12 months or so, which of the following have you done? (Select all that apply):

    “Wear your shirt inside out”

    “Forget to hand in homework”

    “Lock your keys in the car while it was still running”

    “Discipline your child by spanking”

    “Take money out of your spouse’s wallet”

    “Meet an ex-girlfriend or ex-boyfriend behind your spouse’s back”

    “Withhold some information about your income at tax time”

    “Falsely accuse your neighbor of tax dodging”

Notice how this question moves the respondent from less threatening to very threatening answer choices. And by keeping the tax item embedded in the middle, not the very first or the very last, the respondent sees that there are much worse behaviors than tax cheating to admit to. Hence, he or she is more likely to be truthful.

Summary

Questionnaire design is as much an art as it is a science, and wording sensitive questions is almost entirely an art. By building trust with your respondent, making him or her feel that it’s purely human to have the issue or behavior you’re asking about, and finding soft, indirect ways to broach the issue, you can get him or her to respond more truthfully and calmly. As they say, “You catch more flies with honey than with vinegar!”