Archive for the ‘Primary Research’ Category

Are Mail Surveys Useless?

December 21, 2010

These days, most surveys are delivered online. Researchers – especially with the proliferation of consumer panels – can now program a survey, administer it to a relevant sample, and get results within a few days, at a relatively low cost per complete. This is a far cry from the days when most surveys were conducted by mail. Mail surveys often needed to be planned out well in advance; had to be kept in the field for several weeks, if not a few months; required incentives and reminders; and often generated low response rates. Needless to say, mail surveys were also quite costly and could not be changed once in the field.

Most marketing research professionals don’t even consider conducting a survey by mail anymore; most now view mail surveys as obsolete. While I certainly favor online and social media surveys over mail surveys, I caution against dismissing mail surveys out of hand. They still have some relevance and, depending on the business objective, may be a better choice than the popular online survey methods.

There are several reasons why you might still consider doing a survey by mail:

  1. Some people still aren’t online. What if you need to survey elderly persons? Or low-income households? Many persons in these groups do not have Internet access, so they cannot be reached online. Assuming they’re not homeless, virtually all of them live at a physical address with a mailbox.


  2. Advance permission is often needed to send e-mail surveys. Because of the Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM) Act of 2003, marketers cannot send promotional e-mail to prospects without permission. While a survey is not promotional, consumers upset about receiving an unsolicited survey might still report it as spam, which could get you into trouble. This is why most e-mail surveys make use of pre-recruited panels. Mail surveys don’t require such permission.


  3. Mailing lists can be obtained for conducting surveys. E-mail address lists cannot be sold, but postal mailing lists can quite often be rented for sending out surveys.


  4. Mail surveys these days may get a better-than-expected response rate. Response rates likely won’t be double-digit, but since few mail surveys are sent these days, those few that are have a better chance of catching the respondent’s attention. And since the respondent isn’t being bombarded with mail surveys, he or she may be more inclined to answer.


  5. Mail surveys offer a greater perception of anonymity and confidentiality – and hence more truthful responses – than online surveys. When a survey is administered online, it’s easy to tell who didn’t respond. When you send a respondent a reminder e-mail, the respondent knows his or her lack of response is known, which may lead him or her to feel that the answers he or she gives are also traceable. As a result, he or she may be less inclined to respond truthfully, let alone respond at all. Although tracking mechanisms have been placed on mail surveys, they’re not as easily discernible as they are for online surveys.

While online surveys appear to be the preferred survey method, there are still times when mail surveys are the better means of data collection. Sometimes, survey projects need to be multimodal in order to achieve a representative sample. Online surveys may be faster and cheaper than mail surveys, but you must still consider the value of the insights each mode promises to bring to your business objective.

Insight Central Resumes Week of January 3, 2011!

In observance of the Christmas and New Year’s holidays, Insight Central will resume the week of January 3, 2011. We here at Analysights wish you and your family and friends a very Merry Christmas, and a Happy, Healthy, and Prosperous New Year!

*************************

Be Sure to Follow Us on Facebook and Twitter!

Thanks to all of you, Analysights now has more than 200 fans on Facebook … and we’d love more! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you and they will be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!


Sending Surveys to Your Customer List? Building a House Panel May Be Better

November 30, 2010

Many times when companies need information quickly, they conduct brief surveys. A single organization may have hundreds of individual accounts with online survey tools like Zoomerang and SurveyMonkey, and each of the employees assigned to such an account may send out surveys of his or her own, depending on the needs of his or her department. The respondents for these surveys are most frequently drawn from the customer list, often pulled from an internal database or from the sales force’s contact management software. This can be a bad idea.

Essentially, what is happening here is that there is no designated owner for marketing research – particularly surveys – in these organizations. As a result, everyone takes it upon himself or herself to collect data via a survey. Since many of these departments have no formal training in questionnaire design, sampling theory, or data analysis, they are bound to get biased, useless results. Moreover, not only does the research process degrade, but customers get confused by incorrectly worded questions and overwhelmed by too many surveys in a short period of time, causing response rates to go down.

In the November 2010 issue of Quirk’s Marketing Research Review, Jeffrey Henning, the founder and vice president of strategy at Vovici, said that companies must first recognize that customer feedback is an asset and then treat it as such. One way to do that would be to build a house panel – a panel developed internally for the organization’s own use.

To do this, there must be a designated panel owner who is responsible for developing the panel. This responsibility should fall within the marketing department – more precisely, the marketing research group. The panel owner must be charged with understanding the survey needs of each stakeholder; the types of information often sought; the customers who are to be recruited to or excluded from the panel; the information to be captured about each panel member; the maintenance of the panel; and the rules governing how often a panelist is to be surveyed, or which panelists get selected for a particular survey. In addition, all surveys should be requisitioned by the interested departments through the marketing research group, which can then ensure that best practices for using the house panel are followed and that duplication of effort is minimized, if not eliminated.
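To make those panel-governance rules concrete, here is a minimal Python sketch of how a panel owner might enforce contact-frequency caps and segment-based selection. The Panelist structure, field names, and thresholds are hypothetical illustrations, not anything prescribed in Henning’s article.

    from dataclasses import dataclass, field
    from datetime import date, timedelta
    from typing import List

    @dataclass
    class Panelist:
        panelist_id: str
        segment: str                                   # e.g., "high-value" or "new customer"
        survey_dates: List[date] = field(default_factory=list)  # dates of past invitations

    def eligible(p: Panelist, today: date,
                 min_days_between: int = 30,           # rest period between surveys
                 max_per_quarter: int = 2) -> bool:    # cap on invitations per quarter
        """Apply the panel owner's contact-frequency rules before inviting."""
        if p.survey_dates and (today - max(p.survey_dates)).days < min_days_between:
            return False
        quarter_start = today - timedelta(days=90)
        return sum(d >= quarter_start for d in p.survey_dates) < max_per_quarter

    def select_sample(panel: List[Panelist], segment: str, n: int, today: date) -> List[Panelist]:
        """Pick up to n eligible panelists from the requested segment."""
        return [p for p in panel if p.segment == segment and eligible(p, today)][:n]

Centralizing rules like these in one place is what lets the marketing research group keep any single customer from being over-surveyed, no matter which department requests the study.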

A house panel can take some time to develop. However, house panels are far preferable to dirty, disparate customer lists: they preserve customers’ willingness to participate in surveys, ensure that surveys are designed to capture the correct information, and make it more likely that the insights they generate are actionable.


Read All About It: Why Newspapers Need Marketing Analytics

October 26, 2010

After nearly 20 years, I decided to let my subscription to the Wall Street Journal lapse. A few months ago, I did likewise with my longtime subscription to the Chicago Tribune. I didn’t want to end my subscriptions, but as a customer, I felt my voice wasn’t being heard.

Some marketing research and predictive modeling might have enabled the Journal and the Tribune to keep me from defecting. From these efforts, both publications could have spotted my increasing frustration and dissatisfaction and intervened before I chose to vote with my feet.

Long story short, I let both subscriptions lapse for the same reason: chronic unreliable delivery, which was allowed to fester for many years despite numerous calls by me to their customer service numbers about missing and late deliveries.

Marketing Research

Both newspapers could have used marketing research to alert them to the likelihood that I would not renew my subscriptions. They each had lots of primary research readily available to them, without needing to do any surveys: my frequent calls to their customer service department, with the same complaint.

Imagine the wealth of insights both papers could have reaped from this data: they could determine the most common breaches of customer service; by looking at the number of times customers complained about the same issue, they could determine where problems were left unresolved; by breaking down the most frequent complaints by geography, they could determine whether additional delivery persons needed to be hired or more training was necessary; and, most of all, they could have identified their most frequent complainers and reached out to them to see what could be improved.
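As a rough illustration of how that complaint history could be mined, here is a short pandas sketch; the data and column names (account_id, complaint_type, zip_code) are invented for the example and are not from either newspaper.

    import pandas as pd

    # Hypothetical complaint log: one row per customer-service call.
    complaints = pd.DataFrame({
        "account_id":     [101, 101, 102, 103, 103, 103],
        "complaint_type": ["missed delivery", "late delivery", "missed delivery",
                           "missed delivery", "missed delivery", "billing"],
        "zip_code":       ["60601", "60601", "60614", "60601", "60601", "60601"],
    })

    # Most common breaches of customer service
    by_type = complaints["complaint_type"].value_counts()

    # Repeat complaints about the same issue suggest unresolved problems
    repeats = (complaints.groupby(["account_id", "complaint_type"])
                         .size()
                         .loc[lambda s: s > 1])

    # Complaint volume by geography: where to add carriers or training
    by_zip = complaints.groupby("zip_code").size().sort_values(ascending=False)

    # The most frequent complainers: candidates for proactive outreach
    top_complainers = complaints["account_id"].value_counts().head(10)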

Both newspapers could have also conducted regular customer satisfaction surveys of their subscribers, asking about overall satisfaction and likelihood of renewing, followed by questions about subscribers’ perceptions about delivery service, quality of reporting, etc. The surveys could have helped the Journal and the Tribune grab the low-hanging fruit by identifying the key elements of service delivery that have the strongest impact on subscriber satisfaction and likelihood of renewal, and then coming up with a strategy to secure satisfaction with those elements.
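One common way to identify that low-hanging fruit is a key-driver analysis: regress overall satisfaction on ratings of the individual service attributes and see which attributes carry the most weight. The sketch below uses statsmodels with made-up ratings; it shows the mechanics only and is not either newspaper’s actual survey or data.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical survey extract: 1-10 ratings from a handful of subscribers.
    ratings = pd.DataFrame({
        "overall_sat":      [9, 4, 7, 8, 3, 6],
        "delivery_service": [9, 2, 6, 8, 1, 5],
        "report_quality":   [8, 7, 7, 9, 6, 7],
        "price_value":      [7, 5, 6, 7, 4, 6],
    })

    X = sm.add_constant(ratings[["delivery_service", "report_quality", "price_value"]])
    model = sm.OLS(ratings["overall_sat"], X).fit()

    # The largest (and statistically significant) coefficients flag the service
    # elements with the strongest impact on overall satisfaction.
    print(model.params.sort_values(ascending=False))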

Predictive Modeling

Another way both newspapers might have been able to intervene and retain my business would have been to predict my likelihood of lapse. This so-called attrition or “churn” modeling is common in industries whose customers are continuity-focused: newspapers and magazines, credit cards, membership associations, health clubs, banks, wireless communications, and broadband cable to name a few.

Attrition modeling (which, incidentally, will be discussed in the next two upcoming Forecast Friday posts) involves developing statistical models comparing attributes and characteristics of current customers with those of former, or churned, customers. The dependent variable being measured is whether a customer churned, so it would be a 1 if “yes” and a 0 if “no.”

Essentially, in building the model, the newspapers would look at several independent, or predictor, variables: customer demographics (e.g., age, income, gender), frequency of complaints, and geography, to name a few. The model would then identify the variables that are the strongest predictors of whether a subscriber will not renew and generate a score between 0 and 1 indicating each subscriber’s probability of not renewing. For example, a probability score of .72 indicates that there is a 72% chance a subscriber will let his or her subscription lapse, and that the newspaper may want to intervene.
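A minimal sketch of that kind of churn model, using logistic regression in scikit-learn, might look like the following. The subscriber data, predictor names, and the 0.7 intervention cutoff are all hypothetical; they simply illustrate the 0/1 dependent variable and probability scores described above.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical subscriber file; "churned" is 1 if the subscription lapsed.
    subs = pd.DataFrame({
        "complaints_12mo": [0, 5, 1, 7, 0, 3, 6, 0],
        "tenure_years":    [12, 3, 8, 2, 15, 4, 1, 9],
        "age":             [55, 34, 47, 29, 62, 41, 25, 50],
        "churned":         [0, 1, 0, 1, 0, 0, 1, 0],
    })

    X = subs[["complaints_12mo", "tenure_years", "age"]]
    y = subs["churned"]

    model = LogisticRegression().fit(X, y)

    # Score every current subscriber: a value of .72 means a 72% chance of lapsing.
    subs["p_churn"] = model.predict_proba(X)[:, 1]
    at_risk = subs[subs["p_churn"] > 0.7]   # candidates for a retention call or outreach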

In my case, both newspapers might have run such an attrition model to see if number of complaints in the last 12 months was a strong predictor of whether a subscriber would lapse. If that were the case, I would have a high probability of churn, and they could then call me; or, if they found that subscribers who churned were clustered in a particular area, they might be able to look for systemic breakdowns in customer service in that area. Either way, both papers could have found a way to salvage the subscriber relationship.


C-Sat Surveys Can Cause Intra-Organizational Conflict

October 20, 2010

I’ve grown somewhat leery of customer satisfaction surveys in recent years.  While I still believe they can add highly useful information for a company to make improvements to the customer experience, I am also convinced that many companies aren’t doing said research properly.

My reservations aside, regardless of whether a company is doing C-Sat research properly, customer satisfaction surveys can also cause intra-organizational friction and conflict.  Because of the ways departments are incentivized and compensated, some will benefit more than others.  Moreover, because many companies either don’t  link their desired financial and operational outcomes – or don’t link them well enough – to the survey, many departments can claim that the research isn’t working.  C-Sat research is fraught with inter-departmental conflict because companies are conducting it with vague objectives and rewarding – or punishing – departments for their ability or inability to meet those vague objectives.

The key to reducing the conflict caused by C-Sat surveys is to have all affected departments share in framing the objectives.  Before the survey is even designed, all parties should have an idea of what is going to be measured – whether it is repeat business, reduced complaints, shorter customer waiting times – and what they will all be accountable for.  Stakeholders should also work together to see how – or if – they can link the survey’s results to financial and operational performance.  And the stakeholders should be provided information, training, and guidelines to aid their managerial actions in response to the survey’s results.

Survey Question Dos and Don’ts Redux

October 19, 2010

This past summer, I published a series of posts for Insight Central about effective questionnaire design. It cannot be stressed enough that survey questions must be carefully thought out in order to obtain information you can act on. In this month’s issue of Quirk’s Marketing Research Review, Brett Plummer of HSM Group, Ltd. reiterates many of the points made in my earlier posts.

Plummer’s article (you’ll need to enter the code 20101008 in the Article ID blank) provides a series of dos and don’ts when writing survey questions. I’ll summarize them here:

Do:

  1. Keep your research objectives in mind;
  2. Consider the best type of question to use in each case;
  3. Think about how you’re going to analyze your data;
  4. Make sure all valid response options are included; and
  5. Consider where you place each question within your survey.

Don’t:

  1. Create confusing or vague questions;
  2. Forget to ensure that the response options to questions are appropriate, thorough, and not overlapping;
  3. Ask leading questions; and
  4. Ask redundant questions.

Plummer does a good job of reminding us of the importance of these guidelines and points out that effective survey questions are the key to an organization’s obtaining the highest quantity and quality of actionable information, and thus maximizing its research investment.