Posts Tagged ‘customer satisfaction’

Read All About It: Why Newspapers Need Marketing Analytics

October 26, 2010

After nearly 20 years, I decided to let my subscription to the Wall Street Journal lapse. A few months ago, I did likewise with my longtime subscription to the Chicago Tribune. I didn’t want to end my subscriptions, but as a customer, I felt my voice wasn’t being heard.

Some marketing research and predictive modeling might have enabled the Journal and the Tribune to keep me from defecting. From these efforts, both publications could have spotted my increasing frustration and dissatisfaction and intervened before I chose to vote with my feet.

Long story short, I let both subscriptions lapse for the same reason: chronically unreliable delivery, which was allowed to fester for years despite my numerous calls to their customer service lines about missing and late deliveries.

Marketing Research

Both newspapers could have used marketing research to alert them to the likelihood that I would not renew my subscriptions. Each already had plenty of primary research readily available, without needing to field a single survey: my frequent calls to its customer service department, always with the same complaint.

Imagine the wealth of insights both papers could have reaped from this data (a brief sketch of this kind of complaint analysis follows the list):

  1. They could determine the most common breaches of customer service.
  2. By counting how many times customers complained about the same issue, they could determine where problems were left unresolved.
  3. By breaking down the most frequent complaints by geography, they could determine whether additional delivery persons needed to be hired, or whether more training was necessary.
  4. Most of all, both newspapers could have found their most frequent complainers and reached out to them to see what could be improved.
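To make that concrete, below is a minimal sketch of this kind of complaint analysis in Python with pandas. The file and column names (complaints.csv, customer_id, issue, zip_code) are hypothetical stand-ins for whatever the newspapers’ service systems actually record:

```python
# Minimal sketch: mining a customer-complaint log.
# File and column names are hypothetical.
import pandas as pd

complaints = pd.read_csv("complaints.csv")

# Most common breaches of customer service
print(complaints["issue"].value_counts().head(10))

# Unresolved problems: the same customer complaining about the same issue repeatedly
repeats = (complaints.groupby(["customer_id", "issue"]).size()
           .loc[lambda s: s > 1]
           .sort_values(ascending=False))

# Geographic hot spots: frequent complaints broken down by delivery area
by_area = complaints.groupby(["zip_code", "issue"]).size().sort_values(ascending=False)

# Most frequent complainers, for proactive outreach
top_complainers = complaints["customer_id"].value_counts().head(25)
```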

Both newspapers could have also conducted regular customer satisfaction surveys of their subscribers, asking about overall satisfaction and likelihood of renewing, followed by questions about subscribers’ perceptions of delivery service, quality of reporting, and so on. Such surveys could have helped the Journal and the Tribune grab the low-hanging fruit: identify the elements of service delivery with the strongest impact on subscriber satisfaction and likelihood of renewal, then devise a strategy to shore up satisfaction with those elements.
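One common way to find that low-hanging fruit is a key-driver analysis: regress the outcome question (overall satisfaction or likelihood of renewal) on the individual attribute ratings and see which attributes carry the most weight. A minimal sketch, assuming hypothetical survey fields rated on a 1-to-5 scale:

```python
# Minimal key-driver sketch using OLS regression (hypothetical column names).
import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("subscriber_survey.csv")
drivers = ["delivery_reliability", "delivery_timeliness",
           "reporting_quality", "price_value"]

X = sm.add_constant(survey[drivers])
model = sm.OLS(survey["likelihood_to_renew"], X).fit()

# Larger coefficients suggest stronger drivers of renewal intent
print(model.params.drop("const").sort_values(ascending=False))
```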

Predictive Modeling

Another way both newspapers might have intervened and retained my business would have been to predict my likelihood of lapsing. This so-called attrition or “churn” modeling is common in industries built on continuing customer relationships: newspapers and magazines, credit cards, membership associations, health clubs, banks, wireless carriers, and broadband cable, to name a few.

Attrition modeling (which, incidentally, will be discussed in the next two Forecast Friday posts) involves developing statistical models that compare the attributes and characteristics of current customers with those of former, or churned, customers. The dependent variable is whether a customer churned, coded 1 for “yes” and 0 for “no.”

Essentially, in building the model, the newspapers would look at several independent, or predictor, variables: customer demographics (e.g., age, income, gender), frequency of complaints, and geography, to name a few. The model would then identify the variables that are the strongest predictors of whether a subscriber will not renew, and generate a score between 0 and 1 indicating each subscriber’s probability of not renewing. For example, a probability score of .72 indicates a 72% chance that a subscriber will let his/her subscription lapse – and that the newspaper may want to intervene.
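As a rough illustration, here is how such a churn model might be fit with logistic regression in Python using scikit-learn. The data file and column names are hypothetical, and a real model would involve careful feature selection and validation:

```python
# Minimal churn-model sketch (hypothetical file and column names).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

subscribers = pd.read_csv("subscribers.csv")

# Predictor variables: demographics, complaint frequency, geography
X = pd.get_dummies(subscribers[["age", "income", "gender",
                                "complaints_12m", "zip_code"]],
                   columns=["gender", "zip_code"])
y = subscribers["churned"]  # 1 = lapsed, 0 = renewed

# In practice you would train on historical subscribers and score current ones
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability of lapsing; e.g., 0.72 means a 72% chance of non-renewal
subscribers["p_churn"] = model.predict_proba(X)[:, 1]
at_risk = subscribers[subscribers["p_churn"] > 0.7]
```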

In my case, both newspapers might have run such an attrition model to see whether the number of complaints in the last 12 months was a strong predictor of whether a subscriber would lapse. If it was, I would have scored a high probability of churn, and they could have called me; or, if they found that subscribers who churned were clustered in a particular area, they could have looked for systemic breakdowns in customer service in that area. Either way, both papers could have found a way to salvage the subscriber relationship.



C-Sat Surveys Can Cause Intra-Organizational Conflict

October 20, 2010

I’ve grown somewhat leery of customer satisfaction surveys in recent years.  While I still believe they can add highly useful information for a company to make improvements to the customer experience, I am also convinced that many companies aren’t doing said research properly.

My reservations aside, regardless of whether a company is doing C-Sat research properly, customer satisfaction surveys can also cause intra-organizational friction and conflict. Because of the ways departments are incentivized and compensated, some will benefit from the results more than others. Moreover, because many companies either don’t link their desired financial and operational outcomes to the survey, or don’t link them well enough, many departments can claim the research isn’t working. C-Sat research is fraught with inter-departmental conflict because companies conduct it with vague objectives and then reward – or punish – departments for their ability to meet those vague objectives.

The key to reducing the conflict caused by C-Sat surveys is to have all affected departments share in framing the objectives.  Before the survey is even designed, all parties should have an idea of what is going to be measured – whether it is repeat business, reduced complaints, shorter customer waiting times – and what they will all be accountable for.  Stakeholders should also work together to see how – or if – they can link the survey’s results to financial and operational performance.  And the stakeholders should be provided information, training, and guidelines to aid their managerial actions in response to the survey’s results.

Former Customers Can Be a Goldmine – Both in Marketing Research and Winback Sales

August 24, 2010

The other day, I stumbled across this May 28, 2010 blog post from MySmallBusinessMentor.com, which discussed how to re-activate former customers. While you should definitely reach out to former customers and try to get them to buy again, your former customers can also provide a wealth of information from a marketing research and process improvement standpoint.

If a customer has lapsed for, say, 90 or 180 days, or a customer who used to buy once a month is now buying only every other month, reach out to that customer, mention that you noticed he/she isn’t frequenting your business as much, and ask if there’s anything your company isn’t providing that he/she would like to see. It could be that they’re not happy with the product, or that they found a similar, less expensive product from a competitor. Maybe they’ve “outgrown” your company’s products, or maybe they lost their job and can no longer afford them. You won’t know unless you ask.
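Spotting these lapsed or slowing customers is straightforward if you keep a transaction log. A minimal sketch, assuming a hypothetical orders.csv with customer_id and order_date columns:

```python
# Minimal sketch: flag lapsed and slowing customers (hypothetical file/columns).
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
today = pd.Timestamp.today()

by_customer = orders.groupby("customer_id")["order_date"]
days_since_last = (today - by_customer.max()).dt.days

# Lapsed: no purchase in the last 90 (or 180) days
lapsed = days_since_last[days_since_last > 90].index

# Slowing: the average gap between orders has stretched out
def avg_gap_days(dates):
    return dates.sort_values().diff().dt.days.mean()

gaps = by_customer.apply(avg_gap_days)
slowing = gaps[gaps > 45].index  # e.g., monthly buyers now buying every other month
```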

For the purposes of marketing research, a lapsed customer can be more valuable than a loyal customer, especially when you consider that acquiring a new customer is six times more costly than retaining an existing customer. Taking the time to hear out a former customer can help you take corrective action to prevent other customer defections, improve your practices and product benefits, and even win back your lost customers.

*************************

Help us Reach 200 Fans on Facebook!

Thanks to all of you, Analysights now has more than 160 Facebook fans! We had hoped to get up to 200 fans by this past Friday, but weren’t so lucky. Can you help us out? If you like Forecast Friday – and our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like Insight Central and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!

Help! Customer Satisfaction is High But Sales are Down!

July 28, 2010

Customer satisfaction measurement has been of great interest to service organizations for some years now. Nearly every industry that is both highly competitive and heavily customer-facing – restaurants, hotels, and banks, for example – knows that a poor customer experience can mean lost future sales to the competition. As a result, these service-oriented businesses make every effort to keep an ear open to the voice of the customer. Indeed, customer satisfaction surveys proliferate – I once received five in a single week – as company after company strives to hear that voice.

And the effort may be futile. This isn’t to say that measuring customer satisfaction isn’t important – most certainly it is. But many companies may be overdoing it. In fact, some companies are seeing negative correlations between customer satisfaction and repeat business! Is this happening to you?

Reasons Why Satisfaction Scores and Sales Don’t Sync

If your customers are praising you in satisfaction surveys but you’re seeing no improvement in sales and repeat business, it could be for one or more of the following reasons:

You’re not Asking the Question Right

Often, a disparity between survey results and actual business results can be attributed to the two measuring different things. If you simply ask, “Overall, how satisfied were you with your stay at XYZ Hotel,” it tells you only about the guest’s most recent experience. If 80 percent of your respondents answer “Satisfied” or “Very Satisfied,” you get information only about their attitudes. Then you compare satisfaction scores against total sales or repeat sales from quarter to quarter, and you find either no correlation or a negative correlation. Why? Because the survey question measured only perceived satisfaction, while the business results measured sales.

On the other hand, if you were to ask the question: “How likely are you to return to XYZ Hotel,” or “How likely are you to recommend XYZ Hotel to a friend or relative,” you might get a better match between responses and business outcomes.

Only Your Happiest Customers Are Responding

Another reason satisfaction scores may be high while sales are declining is that only your most loyal customers are taking the time to complete your survey. Those customers may have been conditioned to respond: their frequent patronage earns them special incentives and better treatment than most customers receive, so they have plenty of reason to rate you highly.

Another, more dangerous, reason your happiest customers may be the only respondents is because the distribution of the survey is “managed,” being sent only to the people most likely to give high scores. There is a great risk of this bias in organizations where top executives’ compensation is tied to customer satisfaction scores.

Respondents Aren’t Telling the Truth

As much as we hate to admit it, we’re not as honest as we claim to be. This is especially true in surveys – entire books could be written on respondent honesty (or the lack thereof). There are several reasons respondents don’t give truthful answers about their satisfaction. One obvious reason is courtesy; some people just don’t like to give negative feedback. Another is fear of follow-up: even with a promise of confidentiality, respondents worry that if they give a poor rating, they’ll receive a phone call from the business’s representative – a call they aren’t comfortable taking.

Survey incentives – if not carefully structured – can also lead respondents to be untruthful. If you offer respondents a chance to win a drawing in exchange for completing your customer satisfaction survey, they may say positive things about their experience in the hope that doing so will increase their odds of winning the prize.

You’re Hearing Your Customer but Not Really Listening

In many cases, your customers might say one thing but really mean another. A customer could be quite satisfied on the whole, yet one or two smaller issues, if left unchecked, can reduce the likelihood of repeat business. For example, suppose you sell clothing online but not shoes, and your customer doesn’t discover this until after loading everything else into the online shopping cart. Assuming he/she doesn’t abandon the cart, the customer completes the order for the clothes. On the survey, that customer might indicate being very satisfied with the order. But deep down, the same customer might not have liked that your online store doesn’t sell shoes. Whether or not the customer mentions the shoes on the survey, the next time he/she wants to buy clothes online, the customer may remember that you don’t sell shoes and place the entire order with a competitor who does.

How Can We Remedy This Disparity?

There are a few ways we can remedy these situations. First, make sure the questions you ask reflect your business goals. If you want satisfied customers to return, be sure to ask how likely they are to return. Then measure the scores against actual repeat business. If you want satisfied customers to recommend your business to a friend, make sure you ask how likely they are to do so and then measure that against referrals. Compare apples to apples.
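The apples-to-apples comparison can be as simple as correlating each respondent’s stated likelihood with what he or she actually did. A minimal sketch, with hypothetical column names:

```python
# Minimal sketch: stated intent vs. actual repeat business (hypothetical columns).
import pandas as pd

df = pd.read_csv("survey_vs_sales.csv")
# likely_to_return: 1-5 survey rating; returned: 1 if the customer came back, 0 if not
print(df["likely_to_return"].corr(df["returned"]))                      # Pearson
print(df["likely_to_return"].corr(df["returned"], method="spearman"))   # rank-based
```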

Second, reduce incentives for bias. Ideally, no executive’s compensation should be tied to survey ratings; tie compensation to actual results instead. If compensation must be tied to survey results, then by all means make sure the survey is administered by employees with no vested interest in its outcome, and that the entire list of people to be surveyed is drawn up by similarly disinterested employees.

Third, encourage non-loyal customers to participate. You might create a separate survey for your most loyal customers. For the non-loyal customers, make sure you have ways to encourage them to respond. Whether it’s through an appropriate incentive (say a coupon for a future visit), or through friendly requests, let your non-loyal customers know you still care about their feedback.

Fourth, place reliability checks in your survey. Ask the same question in two ways (positive and negative) or phrase it slightly differently and compare the results. In the former example, you would expect the answers to be on opposite ends of the rating scale. In the latter, you would expect consistency of responses on the same end of the scale. This helps you determine whether respondents are being truthful.
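Here is a minimal sketch of how such a reliability check might be scored, assuming a positively and a negatively worded version of the same question, each on a hypothetical 1-to-5 agreement scale:

```python
# Minimal sketch: flag inconsistent respondents (hypothetical column names).
import pandas as pd

responses = pd.read_csv("survey_responses.csv")
# q_pos: "The staff was courteous" (1 = strongly disagree ... 5 = strongly agree)
# q_neg: "The staff was rude"      (same scale)

# On a 1-5 scale, reversing q_neg gives (6 - q_neg), which a consistent
# respondent's q_pos answer should sit close to.
responses["inconsistency"] = (responses["q_pos"] - (6 - responses["q_neg"])).abs()
suspect = responses[responses["inconsistency"] >= 2]
```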

Finally, be proactive. In the example of your online clothing store, you might have the foresight to realize that your decision not to sell shoes may impact satisfaction and future business. So you might be upfront about it, but at the same time, offer a link to a cooperating online retailer who does sell shoes, and allow the customer to order shoes from that retailer using the same shopping cart. That may keep the customer’s satisfaction high and increase his/her likelihood of future business.


 

*************************

If you Like Our Posts, Then “Like” Us on Facebook and Twitter!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.

Does the Order of Survey Questions Matter? You Bet!

June 29, 2010

Thanks to online survey tools, executing a survey has never been cheaper. Online surveys allow companies to ask respondents more questions, cover multiple (though related) topics, and get results faster and less expensively than was once possible with telephone surveys. But the ability to ask more questions on more topics means the sequence of survey questions must be considered carefully. While question order has always mattered, it is even more crucial now.

Order Bias

When question order is not considered, several problems occur, most notably order bias. Imagine that a restaurant owner was conducting a customer satisfaction survey. With no prior survey background, he creates a short survey, with questions ordered like this:

  1. Please rate the temperature of your entrée.
  2. Please rate the taste of your food.
  3. Please rate the menu selection here.
  4. Please rate the courtesy of your server.
  5. Please rate the service you received.
  6. Please rate your overall experience at this restaurant.

What’s wrong with this line of questioning? Assuming the questions all have the same answer choices, ranging from “poor” to “excellent,” plenty! First, when several questions with the same rating scale appear in sequence, there’s a great chance a respondent will speed through the survey, providing truthful answers near the beginning and less truthful answers further down. Second, by placing the overall satisfaction question at the end, the restaurateur is biasing the response to it. If the respondent had a positive experience with the temperature of his/her food, that can create a halo effect, making him/her think the taste was also good, as well as the menu selection, and so on. Halo effects can also be negative. That first question ends up setting the context in which the respondent views his/her overall satisfaction.

On the other hand, if the restaurateur shifted the order of the questionnaire as shown below, he would get more reliable answers:

  1. Please rate your overall experience at this restaurant.
  2. Please rate the menu selection here.
  3. Please rate the temperature of your entrée.
  4. Please rate the taste of your food.
  5. Please rate the service you received.
  6. Please rate the courtesy of your server.

Notice the difference? The restaurateur starts with the overall satisfaction question, followed by satisfaction with the menu selection; within menu selection, he asks specifically about the temperature and taste of the food. He then asks about the service, and specifically about the courtesy of the server. This sequence begins with the respondent’s overall satisfaction; once the respondent has offered an overall rating, he/she is asked about each component of it (the menu selection and the service), so the researcher can determine whether a low overall rating stems from low satisfaction with the menu, the service, or both. It also frees the respondent to say truthfully how each component contributed to his/her satisfaction.

Respondent Confusion/No Coherent Organization

Imagine you had developed a new product and wanted to gauge purchase intent. There’s a ton of stuff you want to know: the best price to charge, the best way to promote the product, where respondents will go to buy it, and so on. Many survey neophytes commingle the pricing, promotion, and distribution questions. This is a mistake! The respondent will become confused and fatigued if there’s no clear organization to your survey. If you are asking questions about those three components, your questionnaire should have three sections, and each section should open with a statement like “This section asks you some questions about what you feel the ideal price for this product would be…” or “This section asks you about what features you would like and dislike in this product.” In this fashion, the respondent knows where the line of questioning is going and doesn’t feel confused.

Tips for Ordering Survey Questions Effectively

These are just two examples. Essentially, if you want to order your questionnaire for maximum reliability, response, and clarity, remember to:

  1. Start with broad, general questions and move to narrow, specific ones. If respondents haven’t yet formed a general opinion or point of view on your topic, you can instead order your questionnaire from specific to general.
  2. As I mentioned in last week’s posts, sensitive questions should be asked late in the survey, after your previous questions have established rapport with the respondent.
  3. Unless the topic is highly sensitive, never start a questionnaire with an open-ended question.
  4. Save demographic and classification questions for the end of the questionnaire, unless you need to ask them in order to screen respondents for taking the survey.
  5. Use chronological sequences in questions when obtaining historical information from a respondent.
  6. Make sure all questions on a topic are complete before moving on to another topic, and use transitional statements between topics, as described in the prior paragraph.

Much like designing survey questions, the order of the questioning is as much an art as it is a science. Taking time to organize your questions will reward you with results that are reliable and actionable.