Posts Tagged ‘survey’

Sending Surveys to Your Customer List? Building a House Panel May Be Better

November 30, 2010

Many times when companies need information quickly, they conduct brief surveys. A single organization may have hundreds of individual accounts with online survey tools like Zoomerang and SurveyMonkey, and each employee assigned to one of those accounts may send out surveys of his or her own, depending on the needs of his or her department. The respondents for these surveys are most frequently drawn from the customer list, often pulled from an internal database or from the sales force’s contact management software. This can be a bad idea.

Essentially, what is happening here is that there is no designated owner for marketing research – particularly surveys – in these organizations. As a result, everyone takes it upon himself or herself to collect data via a survey. Since many of these departments have no formal training in questionnaire design, sampling theory, or data analysis, they are bound to get biased, useless results. Moreover, not only does the research process degrade, but customers get confused by poorly worded questions and overwhelmed by too many surveys in a short period of time, causing response rates to go down.

In the November 2010 issue of Quirk’s Marketing Research Review, Jeffrey Henning, the founder and vice president of strategy at Vovici, said that companies must first recognize that customer feedback is an asset and then treat it as such. One way to do that would be to build a house panel – a panel developed internally for the organization’s own use.

To do this, there must be a designated panel owner who is responsible for developing the panel. This responsibility should fall within the marketing department, and more precisely, the marketing research group. The panel owner must be charged with understanding the survey needs of each stakeholder; the types of information often sought; the customers who are to be recruited to or excluded from the panel; the information to be captured about each panel member; the maintenance of the panel; and the rules governing how often a panelist is to be surveyed, or which panelists get selected for a particular survey. In addition, all surveys should be requisitioned by the interested departments through the marketing research group, which can then ensure that best practices for using the house panel are being followed and that duplication of effort is minimized if not eliminated.

A house panel can take some time to develop. However, house panels are far preferable to dirty, disparate customer lists: they preserve customers’ willingness to participate in surveys, ensure that surveys are designed to capture the correct information, and make it more likely that the insights they generate are actionable.

*************************

Be Sure to Follow Us on Facebook and Twitter!

Thanks to all of you, Analysights now has nearly 200 fans on Facebook … and we’d love more! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you and they will be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!


Help! Customer Satisfaction is High But Sales are Down!

July 28, 2010

Customer satisfaction measurement has been of great interest to service organizations for some years now. Nearly every industry that is both highly competitive and heavily customer-facing – like restaurants, hotels, and banks – knows that a poor customer experience can result in lost future sales to the competition. As a result, these service-oriented businesses make every effort to keep their ears open to the voice of the customer. Indeed, customer satisfaction surveys proliferate – I once received five in a single week – as company after company strives to hear that customer voice.

Yet the effort may be futile. This isn’t to say that measuring customer satisfaction isn’t important – most certainly it is. But many companies may be overdoing it. In fact, some companies are seeing negative correlations between customer satisfaction and repeat business! Is this happening to you?

Reasons Why Satisfaction Scores and Sales Don’t Sync

If your customers are praising you in satisfaction surveys but you’re seeing no improvement in sales and repeat business, it could be for one or more of the following reasons:

You’re Not Asking the Question Right

Often, a disparity between survey results and actual business results can be attributed to the two measuring different things. If you simply ask, “Overall, how satisfied were you with your stay at XYZ Hotel?” you learn only about the respondent’s current experience. If 80 percent of your respondents indicate “Satisfied” or “Very Satisfied,” you have information only about their attitudes. Then you compare satisfaction scores to either total sales or repeat sales from quarter to quarter, and you find either no correlation or a negative correlation. Why? Because the survey question measured only perceived satisfaction, while the business results measured sales.

On the other hand, if you were to ask the question: “How likely are you to return to XYZ Hotel,” or “How likely are you to recommend XYZ Hotel to a friend or relative,” you might get a better match between responses and business outcomes.

Only Your Happiest Customers Are Responding

Another reason satisfaction scores may be high while sales are declining is that only your most loyal customers are taking the time to complete your survey. Your most loyal customers may have been conditioned to complete these surveys: their frequent patronage earns them special incentives and better treatment than most customers receive.

Another, more dangerous, reason your happiest customers may be the only respondents is because the distribution of the survey is “managed,” being sent only to the people most likely to give high scores. There is a great risk of this bias in organizations where top executives’ compensation is tied to customer satisfaction scores.

Respondents Aren’t Telling the Truth

As much as we hate to admit it, we’re not as honest as we claim to be. This is especially true in surveys. Entire books could be written on respondent honesty (or the lack thereof). There are several reasons respondents don’t give truthful answers about their satisfaction. One obvious reason is courtesy; some just don’t like to give negative feedback. Another is fear of follow-up: even with the promise of confidentiality, respondents worry that if they give a poor rating, they’ll receive a phone call from the business’ representative – a call they aren’t comfortable taking.

Survey incentives – if not carefully structured – can also lead to untruthful responses. If you offer respondents a chance to win a drawing in exchange for completing your customer satisfaction survey, they may lie and say positive things about their experience in the hope that doing so will increase their odds of winning the prize.

You’re Hearing Your Customer but Not Really Listening

In many cases, your customers might say one thing but really mean another. A customer could be quite satisfied on the whole, yet one or two smaller issues, if left unchecked, can reduce the likelihood of repeat business. For example, suppose you sell clothing online but not shoes, and your customer doesn’t discover this until after loading everything else into the online shopping cart. Assuming the customer doesn’t abandon the cart, he or she completes the order for the clothes. When the survey arrives, that customer might indicate being very satisfied with the order. But deep down, the same customer might not have liked that your online store doesn’t sell shoes. Whether or not the customer mentions the shoes in the survey, the next time he or she wants to buy clothes online, the customer may remember that you don’t sell shoes and place the entire order with a competitor who does.

How Can We Remedy This Disparity?

There are a few ways we can remedy these situations. First, make sure the questions you ask reflect your business goals. If you want satisfied customers to return, be sure to ask how likely they are to return. Then measure the scores against actual repeat business. If you want satisfied customers to recommend your business to a friend, make sure you ask how likely they are to do so and then measure that against referrals. Compare apples to apples.

Second, reduce incentives for bias. Ideally, no executive’s compensation should be tied to survey ratings. Instead, tie compensation to actual results. If compensation must be tied to survey results, then by all means make sure the survey is administered by employees with no vested interest in the outcome of the survey. Also, make sure that your entire list of people to survey comes from similarly disinterested employees of the organization.

Third, encourage non-loyal customers to participate. You might create a separate survey for your most loyal customers. For the non-loyal customers, make sure you have ways to encourage them to respond. Whether it’s through an appropriate incentive (say a coupon for a future visit), or through friendly requests, let your non-loyal customers know you still care about their feedback.

Fourth, place reliability checks in your survey. Ask the same question in two ways (positive and negative) or phrase it slightly differently and compare the results. In the former example, you would expect the answers to be on opposite ends of the rating scale. In the latter, you would expect consistency of responses on the same end of the scale. This helps you determine whether respondents are being truthful.
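
As an illustration of such a reliability check, here is a minimal sketch in Python, using made-up responses and hypothetical column names, that reverse-codes a negatively worded item and flags respondents whose two answers don’t line up:

```python
import pandas as pd

# Made-up responses on a 5-point agreement scale (1 = Strongly Disagree, 5 = Strongly Agree).
# q_positive and q_negative are hypothetical names for the positively and negatively
# worded versions of the same underlying question.
responses = pd.DataFrame({
    "q_positive": [5, 4, 2, 5, 3],   # e.g., "The front desk staff was helpful."
    "q_negative": [1, 2, 4, 1, 5],   # e.g., "The front desk staff was unhelpful." (reverse-keyed)
})

# Reverse-code the negatively worded item so both items point the same direction.
responses["q_negative_recoded"] = 6 - responses["q_negative"]

# Flag respondents whose two answers differ by more than one scale point.
responses["inconsistent"] = (responses["q_positive"] - responses["q_negative_recoded"]).abs() > 1

print(responses)
print("Possibly careless or untruthful respondents:", int(responses["inconsistent"].sum()))
```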

Finally, be proactive. In the example of your online clothing store, you might have the foresight to realize that your decision not to sell shoes may impact satisfaction and future business. So you might be upfront about it, but at the same time, offer a link to a cooperating online retailer who does sell shoes, and allow the customer to order shoes from that retailer using the same shopping cart. That may keep the customer’s satisfaction high and increase his/her likelihood of future business.


 

*************************

If You Like Our Posts, Then “Like” Us on Facebook and Twitter!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.

More Survey Rating Scale Discussions

July 14, 2010

Yesterday, we discussed situations in which it might be better to use a longer or shorter rating scale for particular survey questions. Today we’re going to build on that discussion, evaluating when it is best to use a balanced or an unbalanced scale; an even- or an odd-numbered scale; and a forced- or unforced-choice question.

Scales in Balance

Most survey questions default to a balanced scale, meaning that there are as many points on the low, or negative, side as there are on the high, or positive, side. A standard five-point agreement scale is an example of a balanced rating scale (1 = Strongly Disagree; 2 = Disagree; 3 = Neither Agree nor Disagree; 4 = Agree; 5 = Strongly Agree).

Yesterday, we talked about how a longer scale is useful if your respondents are likely to skew heavily towards one end of the scale. Another approach would be to use an unbalanced rating scale: one with more points on the side toward which respondents are expected to lean and no neutral midpoint (for example, running from “Disagree” through several gradations of agreement up to “Completely Agree”).

So, if you know that most of your respondents are going to agree with you, breaking down those who disagree is going to be of little value. But if most respondents will agree, the unbalanced scale achieves the same result as the longer-point scale, namely increasing the amount of discrimination in their level of agreement. Generally speaking, if you know your respondents will skew heavily to one side, the unbalanced scale approach is preferable to the larger scale. However, if you are uncertain about how respondents will fall and you don’t have the benefit of past surveys or of doing a pilot survey, then go with a balanced, slightly larger scale. At the very worst, you can collapse the scale afterward based on the responses, as sketched below.
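
A minimal sketch of that after-the-fact collapsing, assuming a 7-point scale being folded down to 5 points (the exact grouping here is an illustrative choice, not a rule):

```python
# Made-up responses on a 7-point agreement scale, folded down to 5 points.
collapse_map = {
    1: 1, 2: 1,   # strongest disagreement points combined
    3: 2,
    4: 3,         # midpoint stays the midpoint
    5: 4,
    6: 5, 7: 5,   # strongest agreement points combined
}

raw_responses = [7, 6, 5, 7, 4, 6, 2]
collapsed = [collapse_map[r] for r in raw_responses]
print(collapsed)  # [5, 5, 4, 5, 3, 5, 1]
```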

Important Note About Unbalanced Scales!

It is important to note that because there is no middle point in an unbalanced scale, you end up with a scale that is ordinal, as opposed to interval. Hence, you cannot properly compute a mean or average response. You must rely on the median response for measures of central tendency.
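
For example, a quick sketch of reporting the median rather than the mean for such ordinal responses (the data here are made up):

```python
import statistics

# Made-up responses on an unbalanced 5-point scale, which is ordinal:
# the "distance" between adjacent points is not assumed to be equal,
# so the median, not the mean, is the appropriate measure of central tendency.
responses = [5, 4, 4, 5, 3, 4, 2, 5, 4, 4]
print("Median response:", statistics.median(responses))  # 4.0
```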

Odd vs. Even Scale Points

There is even some debate about whether to use an even or an odd number of points in a rating scale. The two examples above illustrate odd numbers of scale points. A four-point scale (1 = Strongly Disagree; 2 = Disagree; 3 = Agree; 4 = Strongly Agree) is an example of an even one.

There are reasons a researcher might prefer an even scale over an odd one. In this example, it forces the respondent to draw his/her line in the sand about a particular point of view. This is particularly useful if the question is on something a person cannot truly be undecided about, or if an issue is highly charged. By not having a “Neutral” or “No Opinion” option in the middle of the scale, respondents cannot “cop out” by choosing it. However, forcing respondents to choose may cause some to skip the question, answer it incorrectly, or abandon the survey altogether.

Forced vs. Unforced Choice Questions

Even-point scales lead us into a discussion of forced vs. unforced choice questions. As you can see, the four-point scale example above shows a forced choice: respondents either agree or they disagree. The balanced odd-point scale in the first example illustrates an unforced choice question. You can still use an even-point scale that offers an unforced choice. You might place an option for “Don’t Know” or “Not Sure” after the “Strongly Agree” choice, and then code it with a “DK” or an “X,” rather than a “5.” Your reasons for choosing forced vs. unforced choices are largely the same as discussed above. It’s worth repeating that if you force respondents to make a choice, it could increase the incidence of non-response bias or incorrect selection, so choose carefully.
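
When you do code an unforced option as “DK” rather than a number, keep it out of the scale arithmetic. A minimal sketch, with hypothetical data:

```python
import statistics

# Made-up responses to a 4-point unforced-choice question, where "DK"
# marks a "Don't Know" answer rather than a scale point.
raw = [4, 3, "DK", 2, 4, "DK", 1, 3]

# Keep "Don't Know" answers out of the scale calculations, but report them separately.
scored = [r for r in raw if r != "DK"]
print("Median of scale answers:", statistics.median(scored))            # 3.0
print("Don't Know responses:", len(raw) - len(scored), "of", len(raw))  # 2 of 8
```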

In summary, use unbalanced scales when you know in advance that your respondents will skew heavily toward one end of the scale; use even-point and/or forced-choice questions on topics about which one cannot be apathetic or for highly charged issues. However, be sure you know these conditions hold before committing to them. Guessing wrong can alienate respondents and cause you to lose their cooperation and honesty.

*************************************

Let Analysights Take the Pain out of Survey Design!

Rating scales are but one of the important things you need to consider when designing an effective survey. If you need to design a survey that gets to the heart of what you need to know in order for your company to achieve marketing success, call on Analysights. We will take the drudgery out of designing your survey, so you can concentrate on running your business. Check out our Web site or call (847) 895-2565.

Consider Respondents When Using Rating Scale Questions in Surveys

July 13, 2010

The art of questionnaire design is full of minute details, especially when it comes to rating scales. The considerations for rating questions are as nuanced as they are numerous: How many rating points to use? An even or an odd number of points? A balanced or an unbalanced scale? Forced or unforced choice? There are many options, and many researchers default to a five- or 10-point rating scale out of rote or past experience. A poorly chosen rating scale – or one agonized over too long – can lead to biased responses, respondent fatigue and abandonment, and useless results. When deciding on which rating scales to use, the most important first consideration is who your respondents are.

How Many Points?

The number of points to use in a rating scale can be challenging to decide. Use too few points, and you may not get very precise data; use too many, and you may confuse or tire your respondents. Just how many points are appropriate depends on your audience. If your respondents are likely to skew either heavily positive or heavily negative, then you might want to opt for more points, such as a seven- to 10-point scale. This is because people who are generally positive (or negative) toward your company or product can still differ in the intensity of their attitudes and agreement.

Let’s assume a professional association conducts a survey of its members and asks the question “Overall, how satisfied are you with your membership in our organization?” Consider a five-point scale (1 = Very Dissatisfied; 2 = Dissatisfied; 3 = Neither Satisfied nor Dissatisfied; 4 = Satisfied; 5 = Very Satisfied).

Generally, if 80% of the association’s members answer “Satisfied” or “Very Satisfied,” that result by itself is of little value to the association. There’s no way to gauge the intensity of their satisfaction. But if the association were to use a nine-point scale – say, running from 1 (not at all satisfied) to 9 (extremely satisfied) – then those 80% of satisfied members will be more spread out in terms of their satisfaction. For example, if 80% of respondents give a score greater than 5, but only 10% give a score of 9, then the association has an approximation of its hardest-core supporters and a better idea of how fluid member satisfaction is. It can then focus on developing programs that graduate members from the six-to-eight ratings toward a nine.

Also, the lengthier scale can be useful if you’re using this question’s responses as a dependent variable when performing regression analysis, using the responses of other questions to predict responses to this question. These options are not available with the five-point scale. Of course, a seven-point scale might be used instead of a nine-point, depending on the degree of skewness in responses.
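
As a rough illustration of that regression use, here is a minimal sketch with made-up ratings, predicting the overall nine-point satisfaction score from two other, hypothetical rating questions via ordinary least squares:

```python
import numpy as np

# Made-up data: each position is one member's ratings.
overall = np.array([9, 7, 8, 5, 6, 9, 4, 7, 8, 6])   # 9-point overall satisfaction (dependent variable)
value   = np.array([8, 6, 7, 4, 5, 9, 3, 6, 8, 5])   # hypothetical "value for dues" rating
events  = np.array([9, 7, 7, 6, 6, 8, 5, 7, 7, 6])   # hypothetical "quality of events" rating

# Ordinary least squares: overall is modeled as b0 + b1*value + b2*events.
X = np.column_stack([np.ones_like(value), value, events])
coefficients, *_ = np.linalg.lstsq(X, overall, rcond=None)
print("Intercept and coefficients:", coefficients)
```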

How Do You Determine the Degree of Respondent Skewness Before Administering the Survey?

It can be hard to know in advance how respondents will rate and whether the ratings will be normally distributed or skewed. There are two ways to find out: past surveys and pilot surveys.

Past Surveys

If the association has conducted this membership satisfaction survey in the past, it might see how respondents have traditionally fallen. If responses have generally been normally distributed, and the association has been using a five-point scale, then the association might want to stay the course.

On the other hand, if the association finds that responses to past surveys fall lopsidedly on one side of the five-point scale, then it might want to consider lengthening the scale. Or, if the association was previously using a seven- or nine-point scale and finding sparse responses on both ends (because the scale is so wide), it may choose to collapse the scale down to five points.

Making changes to survey scales based on past survey responses can be problematic, however, if the past surveys are used for benchmarking. Care must be exercised to ensure that the results of the modified scale are easily translatable or imputable to the results of the past survey scales, so that comparability is maintained.
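
One possible way to preserve that comparability (an illustrative assumption, not the only method) is to linearly rescale scores from the new, longer scale onto the old scale’s range before benchmarking:

```python
# Illustrative assumption: linearly map a score from a new 9-point scale
# onto the old 5-point range so trend comparisons remain possible.
def rescale(score, from_min=1, from_max=9, to_min=1, to_max=5):
    """Map a score from the new 9-point scale onto the old 5-point range."""
    return to_min + (score - from_min) * (to_max - to_min) / (from_max - from_min)

print(rescale(9))  # 5.0
print(rescale(5))  # 3.0 (midpoint maps to midpoint)
print(rescale(1))  # 1.0
```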

Pilot Surveys

The association can also use a pilot survey as a litmus test for the spread of respondent opinion. If the association is unsure how members will score on certain rating questions, it might send two or three versions of the same questions to a very small sample of its membership, one version testing a five-point scale, another a seven-point, and the other a nine-point. If results come back normally distributed on the five-point scale, and sparser and more spread out on the seven- and nine-point scales, then the association knows that a five-point scale is appropriate.

If, on the other hand, the association notices concentration on one end of the scale in all three versions, then it can compare the seven- and nine-point tests. If it sees more sparseness in the nine-point scale, it may opt for the seven-point scale; otherwise, it may go with the nine-point scale.

Of course, for the pilot survey to work, each member of the association must have an equal chance of selection. Of those members who do receive the pilot survey, each must also have an equal chance of getting one of the three versions. This ensures a random probability sample whose results can be generalized to the association’s full membership base.
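
A minimal sketch of that two-stage randomization, using hypothetical member IDs and sample sizes:

```python
import random

# Hypothetical membership list and sample sizes, for illustration only.
members = [f"member_{i}" for i in range(1, 5001)]
versions = ["5-point", "7-point", "9-point"]

random.seed(42)  # reproducible example

# Stage 1: every member has an equal chance of being selected for the pilot.
pilot_sample = random.sample(members, 300)

# Stage 2: every selected member has an equal chance of receiving any version.
assignments = {member: random.choice(versions) for member in pilot_sample}

print(list(assignments.items())[:3])
```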

As you can see, there are lots of considerations involved in constructing a rating scale question. In tomorrow’s blog post, we’ll discuss whether it’s best to use an even or odd number of points, and hence, forced and unforced choices. 

*************************************

Let Analysights Take the Pain out of Survey Design!

Rating scales are but one of the important things you need to consider when designing an effective survey.  If you need to design a survey that gets to the heart of what you need to know in order for your company to achieve marketing success, call on Analysights.  We will take the drudgery out of designing your survey, so you can concentrate on running your business.  Check out our Web site or call (847) 895-2565.

Survey Length Can Impact Findings

July 7, 2010

Last week, I talked about how it might be better to conduct a few short surveys in place of one longer survey. Whether the more frequent, shorter surveys are better or even feasible depends largely on your business problem, urgency, budget, and target respondents. In survey research, though, shorter is almost always preferable to longer.

With more surveys being conducted online, respondent attention spans are short and patience is in short supply. About 53% of respondents to online surveys say they will devote 10 minutes or less to a survey, according to a September 2002 InsightExpress study. Dropout rates tend to increase as surveys get longer: Karen Paterson of Millward Brown found that after 10 minutes, each additional minute a survey takes lowers completion rates by 2%.

Moreover, the number of survey screens (that is, how many times a respondent clicks a “Next” or “Forward” arrow on the Web survey) can greatly fatigue respondents, especially in business-to-business (B2B) research. Bill MacElroy demonstrated in the July/August 2000 issue of Quirk’s Marketing Research Review that the dropout rate of B2B respondents increases exponentially as the number of survey screens increases. According to MacElroy, the dropout rate is 7% for an online survey with 10 screens. With 15 screens, the dropout rate is 9%. But at 30 screens, the dropout rate is 30%, and at 45 screens, a whopping 73%!

The question that should enter all of our minds, then, is: what impact does the dropout rate have on both the integrity and the findings of the survey? Generally, respondents who terminate a survey are lumped together with the non-responders. Non-response error has always been a concern of the most dedicated researchers, but it is quite often ignored in practice. However, with termination rates growing alongside the rise of online surveys, ignoring non-response error can produce misleading results.

Karl Irons, at the American Marketing Association’s November 2001 EXPLOR Forum, pointed out that the longer the survey, the more inclined the respondents who completed it were to check the top two boxes on a purchase-intent question.

In his data, when the survey took 14 minutes or more, nearly half of the respondents who completed it chose the top two boxes, indicating that they would most likely or definitely buy, compared with just one-quarter of respondents when the survey took less than 6.5 minutes.

In addition, InsightExpress compared two surveys – a six-minute, 12-question survey and a 21-minute, 23-question survey – in Issue #11 of Today’s Insights. The completion rate of the shorter survey was 31.4%, but only 11% for the longer one. The demographics of the completing respondents weren’t dramatically different, but the results were markedly different: Just under 9% of the respondents in the shorter survey expressed intent to purchase, but almost 25% of those completing the longer survey did! Only four percent of those completing the shorter survey said the product concept appealed to them, compared to nearly 14% for those completing the longer survey!

Why is this? First, when a survey is long, the people who stay to complete it likely have some vested interest in its subject. If the survey is about premium chocolate, for example, a chocolate lover might stick it out through the duration, and someone like that chocoholic is more likely than the average respondent to purchase premium chocolate. Second, some respondents don’t want to terminate a survey, either because of the incentive offered or because they want to “be polite.” Hence, they might speed through the survey, just marking the top boxes. In either case, the researcher ends up with biased results.

So how do we rectify this? First and foremost, if you have to do a long survey, tell respondents upfront how long it is expected to take – both with dial-up and with high-speed broadband internet connections. Secondly, make sure there is an appropriate incentive for their participation. Also, make use of a progress bar to let respondents know how far along they are in the survey. Make the survey questions as short, as easy to understand, and as simple as possible. And always test the questionnaire before administration. Have someone else read certain questions, paraphrase them, and try to answer them. And of course, if you have the time and money to do a couple of shorter surveys instead, by all means do so.

*************************

Analysights is now on Facebook!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page!