Posts Tagged ‘market research’

Read All About It: Why Newspapers Need Marketing Analytics

October 26, 2010

After nearly 20 years, I decided to let my subscription to the Wall Street Journal lapse. A few months ago, I did likewise with my longtime subscription to the Chicago Tribune. I didn’t want to end my subscriptions, but as a customer, I felt my voice wasn’t being heard.

Some marketing research and predictive modeling might have enabled the Journal and the Tribune to keep me from defecting. From these efforts, both publications could have spotted my increasing frustration and dissatisfaction and intervened before I chose to vote with my feet.

Long story short, I let both subscriptions lapse for the same reason: chronic unreliable delivery, which was allowed to fester for many years despite numerous calls by me to their customer service numbers about missing and late deliveries.

Marketing Research

Both newspapers could have used marketing research to alert them to the likelihood that I would not renew my subscriptions. They each had lots of primary research readily available to them, without needing to do any surveys: my frequent calls to their customer service department, with the same complaint.

Imagine the wealth of insights both papers could have reaped from this data. They could identify the most common breaches of customer service; spot unresolved problems by counting how often customers complained about the same issue; decide, by breaking down the most frequent complaints geographically, whether to hire additional delivery staff or invest in more training; and, most important of all, find their most frequent complainers and reach out to them to learn what could be improved.
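As a sketch of how that complaint data might be mined: the call-log records below (subscriber IDs, ZIP codes, and issues) are invented for illustration, not either paper's actual data.

```python
from collections import Counter

# Hypothetical complaint log: (subscriber_id, zip_code, issue) tuples,
# standing in for what customer-service call records would contain.
complaints = [
    ("S001", "60601", "missed delivery"),
    ("S001", "60601", "missed delivery"),
    ("S002", "60614", "late delivery"),
    ("S003", "60601", "missed delivery"),
    ("S001", "60601", "wet paper"),
    ("S004", "60614", "late delivery"),
]

# Most common breaches of customer service
issue_counts = Counter(issue for _, _, issue in complaints)

# Repeat complaints per subscriber flag problems left unresolved
repeat_complainers = Counter(sid for sid, _, _ in complaints)

# Complaints by geography point to delivery routes needing staff or training
by_zip = Counter(z for _, z, _ in complaints)

# Subscribers worth proactive outreach: more than one complaint on record
outreach_list = [sid for sid, n in repeat_complainers.items() if n > 1]
```

Even this simple tally answers every question in the paragraph above: the top issue, the chronic complainers, and the trouble-spot ZIP codes.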

Both newspapers could also have conducted regular customer satisfaction surveys of their subscribers, asking about overall satisfaction and likelihood of renewing, followed by questions about subscribers’ perceptions of delivery service, quality of reporting, and so on. The surveys could have helped the Journal and the Tribune grab the low-hanging fruit by identifying the elements of service delivery with the strongest impact on subscriber satisfaction and likelihood of renewal, and then devising a strategy to secure satisfaction with those elements.
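The key-driver idea can be sketched by correlating each attribute’s ratings with overall satisfaction. The survey responses below are hypothetical, and a real analysis would use regression on a proper sample; this only shows the mechanics:

```python
# Hypothetical 10-point survey ratings from eight subscribers.
overall  = [9, 4, 7, 8, 3, 6, 9, 5]   # overall satisfaction
delivery = [9, 3, 6, 8, 2, 5, 9, 4]   # delivery-reliability rating
writing  = [7, 8, 6, 7, 9, 6, 8, 7]   # quality-of-reporting rating

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

drivers = {"delivery": pearson(overall, delivery),
           "writing":  pearson(overall, writing)}

# The attribute most strongly tied to overall satisfaction is the
# "low-hanging fruit" to fix first.
strongest = max(drivers, key=drivers.get)
```

In this toy data, delivery ratings track overall satisfaction almost perfectly while reporting quality barely moves it, which is exactly the kind of gap a driver analysis surfaces.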

Predictive Modeling

Another way both newspapers might have been able to intervene and retain my business would have been to predict my likelihood of lapse. This so-called attrition or “churn” modeling is common in subscription- and continuity-based industries: newspapers and magazines, credit cards, membership associations, health clubs, banks, wireless carriers, and broadband cable, to name a few.

Attrition modeling (which, incidentally, will be discussed in the next two Forecast Friday posts) involves building statistical models that compare the attributes and characteristics of current customers with those of former, or churned, customers. The dependent variable is whether a customer churned: a 1 if “yes” and a 0 if “no.”

Essentially, in building the model, the newspapers would look at several independent, or predictor, variables: customer demographics (e.g., age, income, gender), frequency of complaints, and geography, to name a few. The model would identify the variables that are the strongest predictors of whether a subscriber will fail to renew, and would generate a score between 0 and 1 indicating each subscriber’s probability of not renewing. For example, a score of .72 indicates a 72% chance that a subscriber will let his or her subscription lapse, and that the newspaper may want to intervene.
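The scoring step can be sketched in a few lines. This is a minimal illustration, not either paper’s actual model: the coefficients below are hypothetical stand-ins for what a fitted logistic regression would estimate from current versus lapsed subscribers.

```python
import math

# Hypothetical fitted coefficients; a real model would estimate these
# from historical data on renewed vs. lapsed subscribers.
INTERCEPT    = -2.0
B_COMPLAINTS =  0.45   # effect per complaint in the last 12 months
B_TENURE     = -0.05   # effect per year as a subscriber

def churn_probability(complaints_12m, tenure_years):
    """Score a subscriber: estimated probability of letting the subscription lapse."""
    z = INTERCEPT + B_COMPLAINTS * complaints_12m + B_TENURE * tenure_years
    return 1.0 / (1.0 + math.exp(-z))   # logistic link keeps the score in (0, 1)

# A long-tenured subscriber with many unresolved complaints scores far
# higher than an equally tenured quiet one, flagging them for outreach.
high_risk = churn_probability(complaints_12m=10, tenure_years=20)
low_risk  = churn_probability(complaints_12m=0,  tenure_years=20)
```

With these illustrative coefficients, ten complaints push the churn score above .80 while zero complaints leave it under .05, which is precisely the signal that would have triggered a retention call.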

In my case, both newspapers might have run such an attrition model to see if number of complaints in the last 12 months was a strong predictor of whether a subscriber would lapse. If that were the case, I would have a high probability of churn, and they could then call me; or, if they found that subscribers who churned were clustered in a particular area, they might be able to look for systemic breakdowns in customer service in that area. Either way, both papers could have found a way to salvage the subscriber relationship.


Marketing Research in Practice

October 12, 2010

Most of the topics I have written about discuss the concepts of marketing research in theory. Today, I want to give you an overview of how marketing research works in practice. The practical side deserves periodic discussion because the realities of business are constantly changing, and the ideal approach to research and the feasible approach can be very far apart.

Recently, I submitted a bid to a prospective client who wanted to survey a population that was difficult to reach. My bid came in higher than expected, and the department that was to act on the survey’s findings was on a tight budget. I had to explain that the largest cost driver was hiring a marketing research firm to provide the sample. One faction within the company wanted to move ahead at the price I quoted; another wanted to look for ways to reduce the scope of the study, and hence the cost. The tradeoff between cost and scope is often the first issue that emerges in the practice of marketing research.

Much of the practice of marketing research parallels what economists have long called “the basic economic problem”: unlimited wants chasing limited resources. Thanks to the push for company departments to work cross-functionally, there have never been more stakeholders in the outcome of marketing research, each function with its own agenda for the findings. The scope of a study can expand greatly because of the many stakeholders involved, yet the time and money available for it are finite.

Another issue that comes up is the selection of the marketing research vendor. Ideally, a company should retain a vendor who is strong in the type of research methodology to be used. In reality, however, this isn’t always possible. Many marketers don’t deal with marketing research vendors often enough to know their areas of expertise; many believe every vendor is the same. That’s hardly the case. Before I started Analysights, I worked for a membership association that had conducted an employee satisfaction survey and retained a firm that had fielded several such surveys. As part of the project, the firm would compare the association’s ratings to those of other companies’ employees who took a similar survey. However, most of the employers who called on this firm were financial institutions – banks in particular – and their ratings were not comparable to the association’s. As a result, the peer comparison was useless.

Moreover, retaining a vendor who is well-versed in a particular methodology may not be feasible precisely because the vendor does it so well that it charges a premium for the service. Hence, clients are often forced to settle for second-best solutions.

There are many other political issues that come up in the practice of marketing research, too numerous to list here. The key thing to remember is that marketing research provides information, and information confers power. The department that controls the information wields great power in the organization, which can result in less-than-ideal marketing research outcomes.

To ensure that your marketing research outcomes come as close to ideal as possible, take a series of proactive steps. First, get all the stakeholders together. Setting aside money and time for the moment, the stakeholders as a group should determine the objectives of the study. Once the objectives are set, the group needs to think through the information required to meet them. Collectively, they should distinguish the “need to know” from the “nice to know” and pursue the former first. Generally, about 20% of the findings you generate will provide nearly 80% of the actionable information you need. It’s always best to start with a study design whose results provide the greatest amount of relevant, actionable information at the smallest possible scope.

Once the stakeholders agree on the objectives and the information required to meet them, they should hash out the tradeoffs among the cost of executing the research, the sophistication of the approach, and the data to be collected. Then timeframe and budget should be considered. Once the tradeoffs have been agreed to, the study’s scope can be adjusted to fit the time and money allotted.

Marketing research, in theory, focuses on the approaches and tools for doing marketing research. In practice, it encompasses much more: office politics and culture; time and budget constraints; organizational power and conflict; and finding the right political and resource balance for conducting the study.

Rankings – not Ratings – Matter in Customer Satisfaction Research

October 5, 2010

Companies spend countless dollars each year trying to measure and improve customer satisfaction. Much research indicates that improved customer satisfaction brings improved sales and share of wallet. Yet the relationship is a weak one: despite how satisfied customers say they are in customer satisfaction surveys, nearly 80% of their spending doesn’t track their stated satisfaction. Why is that?

In the Fall 2010 issue of Marketing Research, Jan Hofmeyr and Ged Parton of Synovate offer two reasons for this weak relationship between business results and satisfaction: companies don’t measure how their customers feel about competitors, nor do they recognize that they should be concerning themselves with the company’s rank, not its rating. For these reasons, the authors argue, models of what drives customer share of wallet offer little confidence.

Hofmeyr and Parton suggest some ways companies can make these improvements, starting with getting ratings of the competition from the same respondent. If, for example, you ask your customers to rate your company on a set of attributes you believe shape their satisfaction, and one customer gives a “9” on a 10-point scale while another gives an “8,” you are naturally inclined to treat the first customer as more likely to return and do business with you in the future. But that is only one piece of the puzzle, the authors say. What if you also ask your customers to rate your competition on those same attributes, and the first customer assigns a competitor a “10” while the second assigns one a “7”? The first customer is very satisfied with your company but even more satisfied with your competitor; the second may be less satisfied with your company in absolute terms, but ranks you above the competition. You’d probably want to spend more time with the one who gave the “8.”

In this example, the authors are essentially turning ratings into rankings. The ranking, not the rating, the authors say, is the key to increased share of wallet. Hofmeyr and Parton’s research showed that if a customer shopped predominantly at two retailers, then regardless of rating, as long as the customer rated one retailer higher than the other, the top-ranked retailer got an average of between 59% and 68% of the customer’s wallet, while the lower-ranked retailer got just 32% on average. If a customer shopped at three retailers, the pattern was similar: the top-ranked retailer got as much as a 58% share of the customer’s wallet; the second-place retailer, 25%; and the lowest-ranked, 17%.
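The conversion from ratings to rankings is mechanical, as this small sketch with two hypothetical respondents shows (retailer names and scores are invented):

```python
# Hypothetical per-respondent ratings of the retailers each one shops.
respondents = {
    "cust_A": {"us": 9, "rival": 10},  # high rating for us, but rival ranks first
    "cust_B": {"us": 8, "rival": 7},   # lower rating for us, but we rank first
}

def rank_retailers(ratings):
    """Turn one respondent's ratings into an ordered ranking, best first."""
    return sorted(ratings, key=ratings.get, reverse=True)

rankings = {cust: rank_retailers(r) for cust, r in respondents.items()}
```

The respondent who gave the “9” ranks the rival first, while the respondent who gave only an “8” ranks us first, reproducing the counterintuitive conclusion above: the “8” customer is the better share-of-wallet bet.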

While it is important to have customers rate your company on satisfaction, it is just as important to have them rate your competition on the same evoked set and then order and rescale the ratings so that you can see where your company stands. By ranking your company with respect to your competition, you can much more easily determine gaps between satisfaction expectations and delivery so that you can increase share of wallet.

Was Marketing Research Absent from Liz Claiborne’s Strategy to Target Younger Consumers?

August 17, 2010

Yesterday’s Wall Street Journal reported that Liz Claiborne’s efforts to appeal to younger female consumers may have been the company’s downfall. This month, J. C. Penney will launch an exclusive line of Liz Claiborne clothing, home and accessories. As part of the agreement, Claiborne cedes control of production and marketing and converts the label into a mass market line in exchange for royalties, the article reported. In five years, Penney also has the option to buy U.S. rights to the Liz Claiborne name.

This may well be the concluding chapter in what appears to have been a failed attempt by Liz Claiborne to broaden its appeal to younger women. Apparently, Claiborne correctly realized it would need to move to a younger consumer, as most of its customers had been working-age Baby Boomers, who have begun to retire. However, the Wall Street Journal indicates that its efforts to target younger female consumers actually did more harm to the brand.

In trying to appeal to the younger crowd, Liz Claiborne nixed, sold off, or licensed out tried-and-true lines; it changed designs so much that it confused its existing customer base; it introduced lower-priced items, eroding its appeal as a high-end brand; and it strained its long-term relationship with Macy’s.

As I read the article, I couldn’t help asking myself whether Liz Claiborne did its homework. I don’t know whether Claiborne did or didn’t do marketing research, but deciding to pursue a new target market requires extensive marketing research, because unaided judgment invites so many mistakes. Among other things, it would have been important to survey younger female shoppers to understand what they needed in workplace casual attire, and to look for common ground between existing product lines and the new, emerging fashions the younger crowd was embracing. Most likely, the research would have led Claiborne to develop lines new enough to appeal to the younger working woman, yet traditional enough to maintain loyalty among its existing Boomer customers. If the research showed that younger women wanted something drastically different in the way of style, Claiborne could have used that information to develop a completely different line (likely under a whole different brand) aimed at those preferences.

When appealing to a new target market, it is also important to do pricing research. Surely younger consumers don’t have the discretionary income that older ones do. But that doesn’t necessarily mean a company should introduce lower-priced apparel. As Van Westendorp’s pricing research suggests, a price can communicate one of four things to consumers: a good buy, a luxury item, an overpriced item, or a cheap, low-quality item. I can only wonder whether the introduction of lower-priced merchandise led consumers to believe the newer Liz Claiborne lines were of lower quality.
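As a rough illustration of the pricing-research idea: the sketch below is a simplified two-question version (the full Van Westendorp method asks four price questions per respondent), and every number in it is invented.

```python
# Hypothetical responses: each respondent names the price below which the
# item seems suspiciously cheap, and the price above which it seems too
# expensive. (Simplified from the four-question Van Westendorp design.)
too_cheap_below     = [20, 25, 30, 35, 40]
too_expensive_above = [60, 70, 80, 90, 100]

def share_too_cheap(price):
    """Share of respondents who would read this price as 'cheap, low quality'."""
    return sum(t > price for t in too_cheap_below) / len(too_cheap_below)

def share_too_expensive(price):
    """Share of respondents who would read this price as overpriced."""
    return sum(t < price for t in too_expensive_above) / len(too_expensive_above)

# The acceptable price range is where neither objection reaches a majority.
acceptable = [p for p in range(10, 120, 5)
              if share_too_cheap(p) < 0.5 and share_too_expensive(p) < 0.5]
```

With these toy numbers the acceptable range runs from 30 to 80: price below it and a majority of respondents read the merchandise as low quality, which is exactly the risk the paragraph above raises about Claiborne’s lower-priced lines.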

Companies that don’t conduct marketing research – or conduct it inadequately – increase their risk of failure, declining sales, customer defections, and increased competition.

*************************

Help us Reach 200 Fans on Facebook!

Thanks to all of you, Analysights now has more than 150 Facebook fans! We had hoped to get up to 200 fans by this past Friday, but weren’t so lucky. Can you help us out? If you like Forecast Friday – and our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like Insight Central and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!

Too Many Cooks Can Even Spoil the Marketing Research Broth

August 11, 2010

In yesterday’s blog post, we discussed the importance of sharing research findings with other stakeholders in your organization. Today, we’re going to discuss how that can go too far, by trying to accommodate too many constituencies within the organization in the design of our survey. It is very easy to fall into the “too many cooks” problem of marketing research. Sometimes budget constraints force us to gather as much information as possible from a single survey to satisfy all the stakeholders involved. Other times, the department commissioning the survey is not the one funding it, so the head of the funding department will see to it that the survey also serves some of his or her objectives.

The problem with involving too many departments in the planning and design of a survey is that it creates considerable infighting and results in a “satisficing” survey: one that asks several questions, some tailored to each of the company’s internal constituents. Such surveys end up overly long, cumbersome, and lacking a coherent focus. Quite often, they produce respondent fatigue, unreliable responses, and biased results.

When your company is faced with several departments needing to share a survey for information, the best approach is first to understand the company’s overall objectives for the survey. Then talk with the different departments about those objectives, understand their needs, and get them to prioritize those needs. Once you understand the importance of each topic or issue to each department, match the most important ones back to the survey’s general objectives. Then bring all the departments together and prioritize the information needs that best serve those objectives. It helps to have senior management provide top-down support for this collaboration. Stress to each department that you may not be able to get all the information they want right away, but that the quality and usefulness of the data you collect matters more than the quantity.

*************************

If you Like Our Posts, Then “Like” Us on Facebook and Twitter!

If you want to get more helpful tips like those offered in today’s Insight Central post, then be sure to check out Analysights’ Facebook page and “Like” us! By “Like-ing” us on Facebook, you’ll be informed each time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.