Archive for the ‘market research’ Category

Are Mail Surveys Useless?

December 21, 2010

These days, most surveys are delivered online. Researchers – especially with the proliferation of consumer panels – can now program a survey, administer it to a relevant sample, and get results within a few days, at a relatively low cost per complete. This is a far cry from the days when most surveys were conducted by mail. Mail surveys often needed to be planned well in advance, had to be kept in the field for several weeks – if not a few months – required incentives and reminders, and often generated low response rates. Needless to say, mail surveys were also quite costly and could not be changed once in the field.

Most marketing research professionals don’t even consider conducting a survey by mail anymore, viewing mail surveys as obsolete. While I certainly favor online and social media surveys over mail surveys, I caution against dismissing mail surveys out of hand. They still have some relevance and, depending on the business objective, may be a better choice than the popular online survey methods.

There are several reasons why you might still consider doing a survey by mail:

  1. Some people still aren’t online. What if you need to survey elderly persons? Or low-income households? Many people in these groups do not have Internet access, so they cannot be reached online. Assuming they’re not homeless, virtually all of them live at a physical address with a mailbox.

  2. Advance permission is often needed to send e-mail surveys. Because of the Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM) Act of 2003, marketers cannot send promotional e-mail to prospects without permission. While a survey is not promotional, consumers upset about receiving an unsolicited survey might still report it as spam, getting you into trouble. This is why most e-mail surveys make use of pre-recruited panels. Mail surveys don’t require such permission.

  3. Mailing lists can be obtained for surveys. While e-mail address lists generally cannot be bought or sold, postal mailing lists can quite often be rented for sending out surveys.

  4. Mail surveys these days may get a better-than-expected response rate. Response rates likely won’t reach double digits, but since few mail surveys are sent these days, the few that are have a better chance of catching the respondent’s attention. And since respondents aren’t being bombarded with mail surveys, they may be more inclined to answer.

  5. Mail surveys offer a greater perception of anonymity and confidentiality – and hence more truthful responses – than online surveys. Because online surveys are administered electronically, it’s easy to tell who didn’t respond. When you send a respondent a reminder e-mail, the respondent knows his or her lack of response has been noticed, and may feel that any answers given are also traceable. As a result, he or she may be less inclined to respond truthfully, let alone respond at all. Although tracking mechanisms have been placed on mail surveys, they’re not as easily discernible as they are for online surveys.

While online surveys appear to be the preferred survey method, there are still times when mail surveys are the better means of data collection. Sometimes, survey projects need to be multimodal in order to achieve a representative sample. Even though online surveys are faster and cheaper than mail surveys, you must still consider the value of the insights each mode promises to bring to your business objective.

Insight Central Resumes Week of January 3, 2011!

In observance of the Christmas and New Year’s holidays, Insight Central will resume the week of January 3, 2011. We here at Analysights wish you and your family and friends a very Merry Christmas, and a Happy, Healthy, and Prosperous New Year!

*************************

Be Sure to Follow us on Facebook and Twitter!

Thanks to all of you, Analysights now has more than 200 fans on Facebook … and we’d love more! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you and they will be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!


Forecast Friday Topic: Leading Indicators and Surveys of Expectations

December 9, 2010

(Thirty-second in a series)

Most of the forecasting methods we have discussed so far deal with generating forecasts for a steady-state scenario. Yet the nature of the business cycle is such that there are long periods of growth, long periods of decline, and periods of plateau. Many managers and planners would love to know how to spot the moment when things are about to change for better or worse. Spotting these turning points can be difficult with standard forecasting procedures; yet being able to identify when business activity is going to enter a prolonged expansion or a protracted decline can greatly enhance managerial and organizational planning. Two of the most common ways managers anticipate turning points in a time series are leading economic indicators and surveys of expectations. This post discusses both.

Leading Economic Indicators

Nobody has a crystal ball. Yet some time series exhibit patterns that foreshadow economic activity to come. Quite often, when activity turns up in one time series, the broader economy follows suit months later. When movements in a time series seem to anticipate coming economic activity, the time series is said to be a leading economic indicator. When a time series moves in tandem with economic activity, it is said to be a coincident economic indicator; and when movements in a time series trail economic activity, it is said to be a lagging indicator. Economic indicators are nothing new. The ancient Phoenicians, whose empire was built on trading, often used the number of ships arriving in port as an indicator of trading and economic activity.

Economic indicators can be procyclic – that is, they increase as economic activity increases and decrease as it decreases; countercyclic – meaning they decline when the economy is improving and increase when it is declining; or acyclic, having little or no correlation with the broader economy. Acyclic indicators are rare and are usually relegated to subsectors of the economy, relative to which they behave procyclically or countercyclically.
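To make these classifications concrete, here is a minimal sketch in Python of how you might label a candidate indicator by correlating it against a reference series (say, monthly GDP growth or industry sales) at various leads and lags. The function name, the 0.3 cutoff, and the data handling are illustrative assumptions, not part of any standard:

```python
import numpy as np

def classify_indicator(indicator, economy, max_lag=12, threshold=0.3):
    """Label a candidate time series relative to a reference economic series.

    A positive best lag means the indicator moves first (leading); zero means
    coincident; negative means lagging. A positive correlation marks the
    series as procyclic, a negative one as countercyclic, and weak
    correlation at every lag suggests it is acyclic.
    """
    best_lag, best_corr = 0, 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:        # indicator at time t vs. economy at time t + lag
            x, y = indicator[:-lag], economy[lag:]
        elif lag < 0:      # indicator at time t + |lag| vs. economy at time t
            x, y = indicator[-lag:], economy[:lag]
        else:
            x, y = indicator, economy
        corr = np.corrcoef(x, y)[0, 1]
        if abs(corr) > abs(best_corr):
            best_lag, best_corr = lag, corr
    if abs(best_corr) < threshold:
        cycle = "acyclic"
    else:
        cycle = "procyclic" if best_corr > 0 else "countercyclic"
    timing = ("leading" if best_lag > 0
              else "lagging" if best_lag < 0 else "coincident")
    return cycle, timing, best_lag, best_corr
```

On monthly data, a result like ("procyclic", "leading", 6, 0.82) would suggest the series foreshadows the reference economy by roughly six months.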

Since 1961, the U.S. Department of Commerce has published the Survey of Current Business, which details monthly changes in leading indicators. The Conference Board publishes a composite index of 10 leading economic indicators, whose movements suggest changes in economic activity six to nine months into the future. The 10 components are (reprinted from Investopedia.com):

  1. the average weekly hours worked by manufacturing workers;
  2. the average number of initial applications for unemployment insurance;
  3. the amount of manufacturers’ new orders for consumer goods and materials;
  4. the speed of delivery of new merchandise to vendors from suppliers;
  5. the amount of new orders for capital goods unrelated to defense;
  6. the amount of new building permits for residential buildings;
  7. the S&P 500 stock index;
  8. the inflation-adjusted monetary supply (M2);
  9. the spread between long and short interest rates; and
  10. consumer sentiment.

These indicators are used to measure changes in the broader economy, but each industry or organization may have its own indicators of business activity. For your business, the choice of which time series to use as leading indicators – and the weight each receives – depends on several factors, including:

  1. How well the time series tends to lead activity in your firm and industry;
  2. How easy the time series is to measure accurately;
  3. How well it conforms to the business cycle;
  4. The time series’ overall performance, not just turning points;
  5. Smoothness – no random blips that give misleading economic cues; and
  6. Availability of data.

Over time, the usefulness of specific indicators and their significance in forecasting do in fact change. You need to keep an eye on how well the indicators you select continue to foreshadow business activity in your industry.
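As a rough sketch of how such a firm-level composite might be built once the series and weights are chosen, the example below standardizes each component so no single volatile series dominates, then combines them into one weighted index. All series names, data, and weights here are hypothetical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.date_range("2009-01-01", periods=24, freq="MS")

# Hypothetical monthly series a firm has chosen as its leading indicators.
indicators = pd.DataFrame({
    "new_orders": rng.normal(100, 10, 24).cumsum(),
    "building_permits": rng.normal(50, 5, 24).cumsum(),
    "supplier_deliveries": rng.normal(75, 8, 24).cumsum(),
}, index=months)

# Illustrative weights reflecting how well each series meets the criteria above.
weights = pd.Series({
    "new_orders": 0.5,
    "building_permits": 0.3,
    "supplier_deliveries": 0.2,
})

# Standardize each component, then combine into a single weighted composite.
standardized = (indicators - indicators.mean()) / indicators.std()
composite = standardized.mul(weights).sum(axis=1)

# A sustained decline in the composite hints at a coming downturn.
print(composite.tail())
```

Re-estimating the weights periodically is one way to act on the warning above that an indicator’s usefulness changes over time.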

Surveys of Expectations

Sometimes time series are not available for economic indicators. Changes in technology or social structure may not be readily picked up in existing time series. Other times, consumer sentiment isn’t fully represented in the economic indicators. As a result, surveys are used to measure business optimism and expectations of the future. Economists and business leaders are often surveyed for their opinions. Sometimes it’s helpful to know whether business leaders anticipate spending more money on equipment purchases in the coming year; whether they plan to hire or lay off workers; or whether they intend to expand. While what respondents to these surveys say and what they actually do can be quite different, overall, the surveys can provide some direction as to which way the economy is heading.

Next Forecast Friday Topic: Calendar Effects in Forecasting

Easter can fall in March or April; every four years, February has an extra day; in some years, months have four weekends, in others, five. These nuances can generate huge forecast errors. Next week’s Forecast Friday post discusses these calendar effects in forecasting and what you can do to adjust for them.


Sending Surveys to Your Customer List? Building a House Panel May Be Better

November 30, 2010

Many times when companies need information quickly, they conduct brief surveys. A single organization may have hundreds of individual accounts with online survey tools like Zoomerang and SurveyMonkey, and each employee assigned to such an account may send out surveys of his or her own, depending on the needs of his or her department. The respondents for these surveys are most frequently drawn from the customer list, often pulled from an internal database or from the sales force’s contact management software. This can be a bad idea.

Essentially, what is happening here is that there is no designated owner for marketing research – particularly surveys – in these organizations. As a result, everyone takes it upon himself or herself to collect data via a survey. Since many of these departments have no formal training in questionnaire design, sampling theory, or data analysis, they are bound to get biased, useless results. Moreover, not only does the research process degrade, but customers get confused by poorly worded questions and overwhelmed by too many surveys in a short period of time, causing response rates to go down.

In the November 2010 issue of Quirk’s Marketing Research Review, Jeffrey Henning, the founder and vice president of strategy at Vovici, said that companies must first recognize that customer feedback is an asset and then treat it as such. One way to do that would be to build a house panel – a panel developed internally for the organization’s own use.

To do this, there must be a designated panel owner who is responsible for developing the panel. This role should fall within the marketing department – more precisely, the marketing research group. The panel owner must be charged with understanding the survey needs of each stakeholder; the types of information often sought; the customers who are to be recruited to or excluded from the panel; the information to be captured about each panel member; the maintenance of the panel; and the rules governing how often a panelist may be surveyed and which panelists get selected for a particular survey (see the sketch below for one way such rules might be enforced). In addition, all survey requests should be submitted by the interested departments to the marketing research group, which can then ensure that best practices for using the house panel are followed and that duplication of effort is minimized, if not eliminated.
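Here is a minimal sketch of how a contact-frequency rule might be enforced in code; the field names and the 90-day rest period are hypothetical assumptions, not recommendations from Henning’s article:

```python
import random
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

REST_PERIOD = timedelta(days=90)  # hypothetical minimum gap between surveys

@dataclass
class Panelist:
    panelist_id: str
    segment: str                   # e.g., "small business" or "enterprise"
    last_surveyed: Optional[date] = None

def eligible(panelists, segment, today):
    """Panelists in the target segment who are past their rest period."""
    return [p for p in panelists
            if p.segment == segment
            and (p.last_surveyed is None
                 or today - p.last_surveyed >= REST_PERIOD)]

def draw_sample(panelists, segment, n, today=None):
    """Randomly select up to n eligible panelists and record the contact."""
    today = today or date.today()
    pool = eligible(panelists, segment, today)
    sample = random.sample(pool, min(n, len(pool)))
    for p in sample:
        p.last_surveyed = today   # prevents over-surveying on the next draw
    return sample
```

Centralizing this logic with the panel owner is what keeps individual departments from unknowingly surveying the same customers again and again.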

A house panel can take some time to develop. However, house panels are far preferable to dirty, disparate customer lists: they preserve customers’ willingness to participate in surveys, ensure that surveys are designed to capture the correct information, and help ensure that the insights they generate are actionable.


Read All About It: Why Newspapers Need Marketing Analytics

October 26, 2010

After nearly 20 years, I decided to let my subscription to the Wall Street Journal lapse. A few months ago, I did likewise with my longtime subscription to the Chicago Tribune. I didn’t want to end my subscriptions, but as a customer, I felt my voice wasn’t being heard.

Some marketing research and predictive modeling might have enabled the Journal and the Tribune to keep me from defecting. From these efforts, both publications could have spotted my increasing frustration and dissatisfaction and intervened before I chose to vote with my feet.

Long story short, I let both subscriptions lapse for the same reason: chronically unreliable delivery, which was allowed to fester for years despite my numerous calls to their customer service numbers about missing and late deliveries.

Marketing Research

Both newspapers could have used marketing research to alert them to the likelihood that I would not renew my subscriptions. They each had a wealth of primary research readily available, without needing to conduct any surveys: my frequent calls to their customer service departments, all with the same complaint.

Imagine the wealth of insights both papers could have reaped from this data: they could determine the most common breaches of customer service; by looking at the number of times customers complained about the same issue, they could determine where problems were left unresolved; by breaking down the most frequent complaints by geography, they could determine whether additional delivery persons needed to be hired or more training was necessary; and, most of all, both newspapers could have identified their most frequent complainers and reached out to them to see what could be improved.

Both newspapers could also have conducted regular customer satisfaction surveys of their subscribers, asking about overall satisfaction and likelihood of renewing, followed by questions about subscribers’ perceptions of delivery service, quality of reporting, and so on. The surveys could have helped the Journal and the Tribune grab the low-hanging fruit by identifying the key elements of service delivery that have the strongest impact on subscriber satisfaction and likelihood of renewal, and then coming up with a strategy to secure satisfaction with those elements.
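One common technique for identifying those key elements is a key-driver analysis: regress overall satisfaction on subscribers’ ratings of the individual service attributes and compare the coefficients. Below is a minimal sketch on simulated data; the attribute names and effect sizes are invented for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Hypothetical 1-10 attribute ratings from a subscriber satisfaction survey.
delivery = rng.integers(1, 11, n).astype(float)
reporting = rng.integers(1, 11, n).astype(float)
price = rng.integers(1, 11, n).astype(float)

# Simulated overall satisfaction, driven mostly by delivery reliability.
overall = 0.6 * delivery + 0.3 * reporting + 0.1 * price + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([delivery, reporting, price]))
model = sm.OLS(overall, X).fit()

# The largest coefficients flag the attributes that move overall satisfaction
# the most; these are the low-hanging fruit to address first.
print(model.params)  # intercept, delivery, reporting, price
```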

Predictive Modeling

Another way both newspapers might have been able to intervene and retain my business would have been to predict my likelihood of lapse. This so-called attrition or “churn” modeling is common in industries whose customers are continuity-focused: newspapers and magazines, credit cards, membership associations, health clubs, banks, wireless communications, and broadband cable to name a few.

Attrition modeling (which, incidentally, will be discussed in the next two Forecast Friday posts) involves developing statistical models comparing attributes and characteristics of current customers with those of former, or churned, customers. The dependent variable being measured is whether a customer churned: a 1 if “yes” and a 0 if “no.”

Essentially, in building the model, the newspapers would look at several independent, or predictor, variables: customer demographics (e.g., age, income, gender), frequency of complaints, and geography, to name a few. The model would then identify the variables that are the strongest predictors of whether a subscriber will not renew, and would generate a score between 0 and 1 indicating each subscriber’s probability of not renewing. For example, a probability score of .72 indicates a 72% chance that a subscriber will let his or her subscription lapse, and that the newspaper may want to intervene.
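As a minimal sketch of what such a model might look like, here is a logistic regression on simulated subscriber data; the predictors and coefficients are hypothetical, not the newspapers’ actual variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# Hypothetical predictor variables for each current or former subscriber.
complaints_12mo = rng.poisson(1.5, n)   # service complaints, last 12 months
tenure_years = rng.uniform(0, 20, n)    # length of the subscription
age = rng.integers(25, 80, n)

# Simulated outcome: more complaints and shorter tenure raise churn odds.
logit = -2 + 0.8 * complaints_12mo - 0.1 * tenure_years
churned = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = lapsed

X = np.column_stack([complaints_12mo, tenure_years, age])
model = LogisticRegression().fit(X, churned)

# Score a subscriber with 5 complaints, 2 years of tenure, age 45;
# a result of 0.72 would mean a 72% chance of letting the subscription lapse.
p = model.predict_proba([[5, 2, 45]])[0, 1]
print(f"churn probability: {p:.2f}")
```

Subscribers scoring above some cutoff would be flagged for an intervention, such as a retention call or a service follow-up.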

In my case, both newspapers might have run such an attrition model to see if the number of complaints in the last 12 months was a strong predictor of whether a subscriber would lapse. If that were the case, I would have had a high probability of churn, and they could have called me; or, if they found that subscribers who churned were clustered in a particular area, they could have looked for systemic breakdowns in customer service in that area. Either way, both papers could have found a way to salvage the subscriber relationship.


C-Sat Surveys Can Cause Intra-Organizational Conflict

October 20, 2010

I’ve grown somewhat leery of customer satisfaction surveys in recent years. While I still believe they can provide highly useful information for a company to improve the customer experience, I am also convinced that many companies aren’t doing this research properly.

My reservations aside, regardless of whether a company is doing C-Sat research properly, customer satisfaction surveys can also cause intra-organizational friction and conflict. Because of the ways departments are incentivized and compensated, some will benefit more than others. Moreover, because many companies either don’t link their desired financial and operational outcomes to the survey – or don’t link them well enough – many departments can claim that the research isn’t working. C-Sat research is fraught with inter-departmental conflict because companies conduct it with vague objectives and then reward – or punish – departments for their ability or inability to meet those vague objectives.

The key to reducing the conflict caused by C-Sat surveys is to have all affected departments share in framing the objectives. Before the survey is even designed, all parties should have an idea of what is going to be measured – whether it is repeat business, reduced complaints, or shorter customer waiting times – and what they will all be accountable for. Stakeholders should also work together to see how – or if – they can link the survey’s results to financial and operational performance. And the stakeholders should be provided information, training, and guidelines to aid their managerial actions in response to the survey’s results.