Archive for the ‘customer satisfaction’ Category

Read All About It: Why Newspapers Need Marketing Analytics

October 26, 2010

After nearly 20 years, I decided to let my subscription to the Wall Street Journal lapse. A few months ago, I did likewise with my longtime subscription to the Chicago Tribune. I didn’t want to end my subscriptions, but as a customer, I felt my voice wasn’t being heard.

Some marketing research and predictive modeling might have enabled the Journal and the Tribune to keep me from defecting. From these efforts, both publications could have spotted my increasing frustration and dissatisfaction and intervened before I chose to vote with my feet.

Long story short, I let both subscriptions lapse for the same reason: chronically unreliable delivery, a problem allowed to fester for years despite my numerous calls to their customer service lines about missing and late papers.

Marketing Research

Both newspapers could have used marketing research to alert them to the likelihood that I would not renew my subscriptions. Each already had plenty of primary research on hand, with no surveys required: my frequent calls to their customer service departments, always with the same complaint.

Imagine the wealth of insights both papers could have reaped from this data. They could identify the most common breaches of customer service. By counting how many times customers complained about the same issue, they could pinpoint where problems were going unresolved. By breaking down the most frequent complaints by geography, they could decide whether additional delivery persons needed to be hired, or whether more training was necessary. Most of all, both newspapers could have identified their most frequent complainers and reached out to them to see what could be improved.

Both newspapers could also have conducted regular customer satisfaction surveys of their subscribers, asking about overall satisfaction and likelihood of renewing, followed by questions about subscribers’ perceptions of delivery service, quality of reporting, and so on. The surveys could have helped the Journal and the Tribune grab the low-hanging fruit: identifying the elements of service delivery with the strongest impact on subscriber satisfaction and likelihood of renewal, and then devising a strategy to secure satisfaction with those elements.
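To make that driver analysis concrete, here is a minimal sketch in Python; the data file and column names are hypothetical, and a real analysis would use a more formal key-driver technique such as regression. This version simply correlates each service attribute’s rating with overall satisfaction to surface the strongest candidates:

```python
import pandas as pd

# Hypothetical survey extract: one row per subscriber's responses.
# All file and column names here are assumptions for illustration.
df = pd.read_csv("subscriber_survey.csv")

attributes = ["delivery_reliability", "reporting_quality",
              "customer_service", "price_value"]

# Correlate each attribute's rating with overall satisfaction;
# the largest correlations flag the "low-hanging fruit" to fix first.
drivers = (df[attributes]
           .corrwith(df["overall_satisfaction"])
           .sort_values(ascending=False))
print(drivers)
```

Attributes at the top of the resulting list would be the first place to look for retention-saving improvements.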

Predictive Modeling

Another way both newspapers might have been able to intervene and retain my business would have been to predict my likelihood of lapsing. This so-called attrition or “churn” modeling is common in subscription- and membership-based industries: newspapers and magazines, credit cards, membership associations, health clubs, banks, wireless carriers, and broadband cable, to name a few.

Attrition modeling (which, incidentally, will be discussed in the next two Forecast Friday posts) involves building statistical models that compare the attributes and characteristics of current customers with those of former, or churned, customers. The dependent variable is whether a customer churned, coded as 1 for “yes” and 0 for “no.”

Essentially, in building the model, the newspapers would look at several independent, or predictor, variables: customer demographics (e.g., age, income, gender), frequency of complaints, and geography, to name a few. The model would identify the variables that are the strongest predictors of whether a subscriber will not renew, and it would generate a score between 0 and 1 indicating each subscriber’s probability of not renewing. For example, a probability score of .72 indicates a 72% chance that a subscriber will let his or her subscription lapse, and that the newspaper may want to intervene.
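As a rough illustration of the mechanics, here is a minimal sketch of such a churn model in Python using scikit-learn. The file and field names are hypothetical, and a production attrition model would demand far more care with variable selection, validation, and calibration:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical subscriber file; 'churned' is 1 if the subscription
# lapsed and 0 if it was renewed. Field names are assumptions.
df = pd.read_csv("subscribers.csv")
X = df[["age", "income", "complaints_last_12mo", "tenure_years"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict_proba returns P(churn) per subscriber: a score of 0.72
# means a 72% chance of lapsing, flagging that subscriber for outreach.
churn_probability = model.predict_proba(X_test)[:, 1]
print(churn_probability[:5])
```

Subscribers with the highest predicted probabilities become the retention team’s call list.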

In my case, both newspapers might have run such an attrition model to see whether the number of complaints in the last 12 months was a strong predictor of whether a subscriber would lapse. If it was, I would have scored with a high probability of churn, and they could have called me; or, if they found that churned subscribers clustered in a particular area, they could have looked for systemic breakdowns in customer service there. Either way, both papers could have found a way to salvage the subscriber relationship.



C-Sat Surveys Can Cause Intra-Organizational Conflict

October 20, 2010

I’ve grown somewhat leery of customer satisfaction surveys in recent years. While I still believe they can yield highly useful information for improving the customer experience, I am also convinced that many companies aren’t doing the research properly.

My reservations aside, regardless of whether a company is doing C-Sat research properly, customer satisfaction surveys can also cause intra-organizational friction and conflict. Because of the ways departments are incentivized and compensated, some will benefit from the results more than others. Moreover, because many companies either don’t link their desired financial and operational outcomes to the survey – or don’t link them well enough – many departments can claim that the research isn’t working. C-Sat research is fraught with inter-departmental conflict because companies conduct it with vague objectives and then reward – or punish – departments for their ability to meet those vague objectives.

The key to reducing the conflict caused by C-Sat surveys is to have all affected departments share in framing the objectives.  Before the survey is even designed, all parties should have an idea of what is going to be measured – whether it is repeat business, reduced complaints, shorter customer waiting times – and what they will all be accountable for.  Stakeholders should also work together to see how – or if – they can link the survey’s results to financial and operational performance.  And the stakeholders should be provided information, training, and guidelines to aid their managerial actions in response to the survey’s results.

Rankings – not Ratings – Matter in Customer Satisfaction Research

October 5, 2010

Companies spend countless dollars each year trying to measure and improve customer satisfaction. Much research has indicated that improved customer satisfaction brings about improved sales and share of wallet. Yet the relationship is a weak one: nearly 80% of customers’ spending bears little relation to the satisfaction they report in surveys. Why is that?

In the Fall 2010 issue of Marketing Research, Jan Hofmeyr and Ged Parton of Synovate offer two reasons for this weak relationship between business results and satisfaction: companies don’t measure how their customers feel about competitors, and they don’t recognize that they should be concerned with the company’s rank, not its rating. For these reasons, the authors argue, models of what drives customer share of wallet inspire little confidence.

Hofmeyr and Parton suggest some ways companies can make these improvements, starting with getting ratings of the competition from the same respondent. Suppose you ask your customers to rate your company on a set of attributes you believe shape their satisfaction. If one customer gives a rating of “9” on a 10-point satisfaction scale and another gives an “8,” you are naturally inclined to treat the first customer as more likely to return and do business with you in the future. But that is only one piece of the puzzle, the authors say. What if you also ask your customers to rate your competition on those same attributes, and the first customer assigns a competitor a “10” while the second customer assigns a “7”? The first customer is very satisfied with your company but even more satisfied with your competitor; the second customer may be less satisfied than the first in absolute terms, but ranks your company above the competition. You’d probably want to spend more time with the one who gave the “8” rating.

In this example, the authors are essentially turning ratings into rankings. The ranking, not the rating, the authors say, is the key to increased share of wallet. Hofmeyr and Parton’s research showed that if a customer shopped predominantly at two retailers, then regardless of the ratings, the top-ranked retailer got an average of between 59% and 68% of the customer’s wallet, while the lower-ranked retailer got just 32% on average. If a customer shopped at three retailers, the pattern was similar: the top-ranked retailer got as much as a 58% share; the second-ranked, 25%; and the lowest-ranked, 17%.

While it is important to have customers rate your company on satisfaction, it is just as important to have them rate your competition on the same evoked set, and then to order and rescale the ratings so you can see where your company stands. By ranking your company against your competition, you can much more easily spot gaps between satisfaction expectations and delivery, and close them to increase share of wallet.
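To make the “order and rescale” step concrete, here is a small sketch with made-up ratings that converts each respondent’s ratings into within-respondent rankings, mirroring the two customers in the example above:

```python
import pandas as pd

# Made-up ratings: each respondent rates two retailers on a 10-point scale.
ratings = pd.DataFrame({
    "respondent": [1, 1, 2, 2],
    "retailer":   ["ours", "competitor", "ours", "competitor"],
    "rating":     [9, 10, 8, 7],
})

# Rank retailers within each respondent (1 = top ranked).
# Respondent 1 rates us a 9 but ranks the competitor first;
# respondent 2 rates us only an 8 yet ranks us first.
ratings["rank"] = (ratings.groupby("respondent")["rating"]
                          .rank(ascending=False, method="min"))
print(ratings)
```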

Help! Customer Satisfaction is High But Sales are Down!

July 28, 2010

Customer satisfaction measurement has been of great interest to service organizations for some years now. Nearly every industry that is both highly competitive and heavily customer-facing – restaurants, hotels, and banks, for example – knows that a poor customer experience can result in lost future sales to the competition. As a result, these service-oriented businesses make every effort to keep their ears open to the voice of the customer. Indeed, customer satisfaction surveys proliferate – I once received five in a single week – as company after company strives to hear that customer voice.

Yet all that effort may be futile. This isn’t to say that measuring customer satisfaction isn’t important – it most certainly is. But many companies may be overdoing it. In fact, some companies are seeing negative correlations between customer satisfaction and repeat business! Is this happening to you?

Reasons Why Satisfaction Scores and Sales Don’t Sync

If your customers are praising you in satisfaction surveys but you’re seeing no improvement in sales and repeat business, it could be for one or more of the following reasons:

You’re Not Asking the Question Right

Often, a disparity between survey results and actual business results can be attributed to the two measuring different things. If you simply ask, “Overall, how satisfied were you with your stay at XYZ Hotel?” the answer only tells you about the respondent’s current experience. If 80 percent of your respondents indicate “Satisfied” or “Very Satisfied,” you have information only about their attitudes. You then compare satisfaction scores to either total sales or repeat sales from quarter to quarter, and find either no correlation or a negative one. Why? Because the survey question measured only perceived satisfaction, while the business results measured sales.

On the other hand, if you were to ask, “How likely are you to return to XYZ Hotel?” or “How likely are you to recommend XYZ Hotel to a friend or relative?” you might get a better match between responses and business outcomes.

Only Your Happiest Customers Are Responding

Another reason satisfaction scores may be high while sales decline is that only your most loyal customers are taking the time to complete your survey. These customers may have been conditioned to respond: their frequent patronage earns them special incentives and better treatment than most customers receive.

Another, more dangerous, reason your happiest customers may be the only respondents is that the distribution of the survey is “managed” – sent only to the people most likely to give high scores. The risk of this bias is greatest in organizations where top executives’ compensation is tied to customer satisfaction scores.

Respondents Aren’t Telling the Truth

As much as we hate to admit it, we’re not as honest as we claim to be. This is especially true in surveys; entire books could be written on respondent honesty (or the lack thereof). There are several reasons respondents don’t give truthful answers about their satisfaction. One obvious reason is courtesy: some people just don’t like to give negative feedback. Another is fear of follow-up: even with the promise of confidentiality, respondents worry that a poor rating will earn them a phone call from the business’s representative, a call they aren’t comfortable taking.

Survey incentives – if not carefully structured – can also lead to untruthful responses. If you offer respondents a chance to win a drawing in exchange for completing your customer satisfaction survey, they may say positive things about their experience in the hope that doing so increases their odds of winning the prize.

You’re Hearing Your Customer but Not Really Listening

In many cases, your customers might say one thing but really mean another. A customer could be quite satisfied on the whole, yet one or two smaller things, if left unchecked, can reduce the likelihood of repeat business. For example, suppose you sell clothing online but not shoes, and your customer doesn’t discover this until after loading everything else into the shopping cart. Assuming the cart isn’t abandoned, the customer completes the order for the clothes he or she wants. When the survey arrives, that customer might indicate being very satisfied with the order. But deep down, the same customer might not have liked that your online store doesn’t sell shoes. Whether or not the customer mentions the shoes in the survey, the next time he or she shops for clothes online, the customer may remember that you don’t sell them and place the entire order with a competitor who does.

How Can We Remedy This Disparity?

There are a few ways we can remedy these situations. First, make sure the questions you ask reflect your business goals. If you want satisfied customers to return, be sure to ask how likely they are to return. Then measure the scores against actual repeat business. If you want satisfied customers to recommend your business to a friend, make sure you ask how likely they are to do so and then measure that against referrals. Compare apples to apples.
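As a sketch of that apples-to-apples comparison (the tables and column names here are hypothetical), you might join each respondent’s stated likelihood of returning to his or her actual order history and check the correlation:

```python
import pandas as pd

# Hypothetical extracts: survey answers and order history.
survey = pd.read_csv("survey.csv")   # columns: customer_id, likely_to_return
orders = pd.read_csv("orders.csv")   # columns: customer_id, order_date

# Flag customers who actually came back (more than one order).
repeats = ((orders.groupby("customer_id").size() > 1)
           .rename("repeated")
           .reset_index())

merged = survey.merge(repeats, on="customer_id", how="left")
merged["repeated"] = merged["repeated"].fillna(False)

# Correlation between stated intent and actual repeat behavior:
# apples to apples.
print(merged["likely_to_return"].corr(merged["repeated"].astype(int)))
```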

Second, reduce incentives for bias. Ideally, no executive’s compensation should be tied to survey ratings; tie compensation to actual results instead. If compensation must be tied to survey results, then by all means make sure the survey is administered by employees with no vested interest in its outcome. Also make sure the full list of people to be surveyed comes from similarly disinterested employees of the organization.

Third, encourage non-loyal customers to participate. You might create a separate survey for your most loyal customers. For the non-loyal customers, make sure you have ways to encourage them to respond. Whether it’s through an appropriate incentive (say a coupon for a future visit), or through friendly requests, let your non-loyal customers know you still care about their feedback.

Fourth, place reliability checks in your survey. Ask the same question in two ways (positive and negative) or phrase it slightly differently and compare the results. In the former example, you would expect the answers to be on opposite ends of the rating scale. In the latter, you would expect consistency of responses on the same end of the scale. This helps you determine whether respondents are being truthful.
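Here is a minimal sketch of how the positive/negative version of such a reliability check might be scored afterward, assuming a 10-point scale and hypothetical item names; it reverse-codes the negatively worded item so that, for a truthful respondent, the two answers should match:

```python
import pandas as pd

# Hypothetical responses with a reliability-check pair on a 10-point
# scale: q1 is worded positively, q1_neg is the same idea worded negatively.
df = pd.read_csv("responses.csv")

SCALE_MAX = 10
# Reverse-code the negative item so a truthful respondent's two
# answers should land on the same end of the scale.
df["q1_neg_reversed"] = SCALE_MAX + 1 - df["q1_neg"]

# Flag respondents whose paired answers differ by more than 2 points;
# large gaps suggest careless or untruthful responding.
df["inconsistent"] = (df["q1"] - df["q1_neg_reversed"]).abs() > 2
print(df["inconsistent"].mean())  # share of suspect respondents
```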

Finally, be proactive. In the example of your online clothing store, you might have the foresight to realize that your decision not to sell shoes may impact satisfaction and future business. So you might be upfront about it, but at the same time, offer a link to a cooperating online retailer who does sell shoes, and allow the customer to order shoes from that retailer using the same shopping cart. That may keep the customer’s satisfaction high and increase his/her likelihood of future business.



*************************

If you Like Our Posts, Then “Like” Us on Facebook and Twitter!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.

Consider Respondents When Using Rating Scale Questions in Surveys

July 13, 2010

The art of questionnaire design is full of minute details, especially when it comes to rating scales. The considerations for rating questions are numerous: How many points should the scale have? An even or odd number of points? A balanced or unbalanced scale? Forced or unforced choice? There are many options, and many researchers default to a five- or 10-point scale simply by rote or past experience. A poorly chosen – or overly fussed-over – rating scale can lead to biased responses, respondent fatigue and abandonment, and useless results. When deciding which rating scale to use, the most important first consideration is who your respondents are.

How Many Points?

Choosing the number of points for a rating scale can be challenging. Use too few points, and you may not get very precise data; use too many, and you may confuse or tire your respondents. Just how many points are appropriate depends on your audience. If your respondents are likely to skew either heavily positive or heavily negative, you might opt for more points, such as a seven- to 10-point scale, because people who are generally positive (or negative) toward your company or product can still differ in the intensity of their attitudes.

Let’s assume a professional association conducts a survey of its members and asks, “Overall, how satisfied are you with your membership in our organization?” Consider a five-point scale:

[Five-point scale: 1 = Very Dissatisfied, 2 = Dissatisfied, 3 = Neither Satisfied nor Dissatisfied, 4 = Satisfied, 5 = Very Satisfied]

If 80% of the association’s members check “satisfied” or “very satisfied” on that scale, the finding is of little real value to the association: there’s no way to gauge the intensity of their satisfaction. But if the association were to use a nine-point scale like this one:

[Nine-point scale: 1 = Very Dissatisfied through 9 = Very Satisfied]

Then those 80% of satisfied members will be more spread out in their satisfaction. For example, if 80% of respondents give a score greater than 5, but only 10% give a score of 9, then the association has an approximation of its hardest-core supporters and a better idea of how fluid member satisfaction is. It can then focus on developing programs that graduate members from ratings of six through eight toward a nine.

The lengthier scale can also be useful if you’re using this question’s responses as the dependent variable in a regression analysis, with responses to other questions predicting responses to this one; that kind of spread simply isn’t available with a five-point scale. Of course, a seven-point scale might be used instead of a nine-point one, depending on the degree of skewness in responses.

How Do You Determine the Degree of Respondent Skewness Before Administering the Survey?

It can be hard to know in advance how respondents will rate and whether the ratings will be normally distributed or skewed. There are two ways to find out: past surveys and pilot surveys.

Past Surveys

If the association has conducted this membership satisfaction survey in the past, it can look at how responses have traditionally been distributed. If responses have generally been normally distributed and the association has been using a five-point scale, it might want to stay the course.

On the other hand, if the association finds that past responses fall lopsidedly on one side of the five-point scale, it might consider lengthening the scale. Or, if the association was previously using a seven- or nine-point scale and finding sparse responses at both ends (because the scale is so wide), it may choose to collapse the scale down to five points.

Making changes to survey scales based on past responses can be problematic, however, if the past surveys are used for benchmarking. Care must be taken to ensure that results from the modified scale can be translated or imputed to the results of past scales, so that comparability is maintained.

Pilot Surveys

The association can also use a pilot survey as a litmus test of the spread in respondent opinion. If the association is unsure how members will score certain rating questions, it might send two or three versions of the same questions to a very small sample of its membership: one testing a five-point scale, another a seven-point, and another a nine-point. If results come back normally distributed on the five-point version and sparser and more spread out on the seven- and nine-point versions, the association knows a five-point scale is appropriate.

If, on the other hand, the association notices concentration at one end of the scale in all three versions, it can compare the seven- and nine-point tests. If responses are sparser on the nine-point scale, it may opt for the seven-point scale; otherwise, it may go with the nine-point scale.

Of course, for the pilot survey to work, each member of the association must have an equal chance of selection, and each recipient must have an equal chance of getting any one of the three versions. This yields a random probability sample whose results can be generalized to the association’s full membership base.
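Once the pilot responses are in, the comparison itself is simple to score. Here is a minimal sketch, assuming hypothetical pilot data with a scale-version label and a rating for each respondent:

```python
import pandas as pd

# Hypothetical pilot data: which scale version (5, 7, or 9 points)
# each respondent received, and the rating he or she gave.
pilot = pd.read_csv("pilot.csv")  # columns: version, rating

# Skewness near zero suggests roughly normal responses; a large
# absolute value means ratings pile up at one end of that scale.
print(pilot.groupby("version")["rating"].skew())
```

A skewness near zero on the five-point version, with sparse tails on the longer versions, points back to the five-point scale.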

As you can see, there are lots of considerations involved in constructing a rating scale question. In tomorrow’s blog post, we’ll discuss whether it’s best to use an even or odd number of points, and hence, forced and unforced choices. 

*************************************

Let Analysights Take the Pain out of Survey Design!

Rating scales are but one of the important things you need to consider when designing an effective survey.  If you need to design a survey that gets to the heart of what you need to know in order for your company to achieve marketing success, call on Analysights.  We will take the drudgery out of designing your survey, so you can concentrate on running your business.  Check out our Web site or call (847) 895-2565.