Posts Tagged ‘marketing research’

Forecast Friday Topic: Expert Judgment

February 10, 2011

(Thirty-seventh in a series)

Last week, we began our discussion of judgmental forecasting methods with judgmental extrapolation, which requires no real understanding of the physical process behind the time series. Today, we will talk about more sophisticated judgmental techniques used in subjective forecasting; "sophisticated" only in the sense that the opinions of "experts" are used in trying to predict the future. The three techniques we will discuss are the Jury of Executive Opinion, sales force composite forecasts, and surveys of expectations.

Jury of Executive Opinion

The Jury of Executive Opinion is quite often seen in an organization's budgeting and strategic planning process. The "jury" is often a group of high-level executives from all areas of the organization – marketing, finance, human resources, manufacturing, etc. – who come together to discuss their respective areas of the business and work toward a composite forecast of where the organization's business is headed. Each executive shares his/her opinions and weighs and evaluates those of the other executives. After discussion, the executives write down their forecasts, which are then averaged.

One example of the Jury of Executive Opinion takes me back to 1999-2000, when I worked for catalog retailer Hammacher Schlemmer. Hammacher Schlemmer convened a weekly committee to estimate the orders coming in for the next two weeks from each of the active catalogs in circulation. The committee was made up of several marketing personnel, including myself (as I was the forecasting analyst!), and managers from the warehouse, in-bound call center, inventory control, and merchandising. We would begin every Wednesday morning reviewing the number of orders that came in for each active catalog, for the prior week and the first two days of the current week. Armed with that order information, and spreadsheets detailing order history for those catalogs’ prior years, each of us would indicate our orders forecasts for the next several weeks ahead. Our forecasts were then averaged, and we would then submit the composite forecasts to the warehouse and call center to assist with their staffing, and to inventory control to ensure adequate purchasing.
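To make the mechanics concrete, here is a minimal sketch of how such a composite might be computed; the catalog names, panel size, and order figures are hypothetical, not Hammacher Schlemmer's actual numbers:

```python
# Minimal sketch of a jury-of-executive-opinion composite: each jury
# member submits an orders forecast per catalog, and the composite is
# the simple average across members. All figures are hypothetical.
jury_forecasts = {
    "Holiday Preview": [12500, 11800, 13200, 12000],  # one number per member
    "Gifts & Gadgets": [8400, 9100, 8800, 8650],
}

composite = {
    catalog: sum(forecasts) / len(forecasts)
    for catalog, forecasts in jury_forecasts.items()
}

for catalog, orders in composite.items():
    print(f"{catalog}: {orders:,.0f} orders forecast")
```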

One of the nice things about the Jury of Executive Opinion is its simplicity. Getting executives to sound off is often pretty easy to do. Moreover, incorporating the experiences of a broad group into the forecasting process may enable companies to see the forest, not just the trees.

However, simple and broad-focused as it may be, the Jury of Executive Opinion is not without its flaws. These meetings can be time consuming, for one. Indeed, at Hammacher Schlemmer, during the last three months of the year – when the holiday season was in full swing – those weekly meetings could take all morning, as nearly a dozen catalogs could be in circulation. Furthermore, group dynamics may actually lead to unwise consensus forecasts. The group is often at risk of being swayed by the opinions of those members who are most articulate, or who have greater seniority or rank within the organization, or simply by its own over-optimism. Another problem is that, as time passes, it becomes difficult to recall which experts' opinions were most reliable and which should be given less weight; as a result, there's no way to hold any individual member accountable for a forecast. Finally, executives are more comfortable using their opinions for mid- and longer-range planning than for shorter period-to-period predictions, especially since recent unexpected events can unduly influence their opinions.

Sales Force Composite Forecasts

When companies have a product that is sold by sales agents in specific territories, it is not uncommon for them to seek the opinions of their sales representatives or branch/territory managers in developing forecasts for each product line. In fact, sales representatives’ opinions can be quite useful, since they are generally close to the customer, and may be able to provide useful insights into purchase intent. Essentially, these companies have their agents develop forecasts for each of the products they sell within a territory. The added benefit of this approach is that a company can develop a forecast for the entire market, as well as for individual territories.
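As a rough illustration of how such a rollup works, here is a short sketch that aggregates hypothetical agent-level forecasts into territory and market totals (the territories, products, and figures are invented for illustration):

```python
import pandas as pd

# Hypothetical sales force composite: each row is one agent's unit
# forecast for one product in one territory. Rolling the rows up gives
# territory-level and market-level forecasts.
forecasts = pd.DataFrame({
    "territory": ["Midwest", "Midwest", "South", "South"],
    "product":   ["Product A", "Product B", "Product A", "Product B"],
    "units":     [340, 510, 290, 470],
})

by_territory = forecasts.groupby(["territory", "product"])["units"].sum()
by_market = forecasts.groupby("product")["units"].sum()  # entire-market forecast

print(by_territory)
print(by_market)
```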

Indeed, when I worked in the market research department of insurance company Bankers Life & Casualty during 1997-1999, we frequently conducted surveys of our sales force and branch managers to understand how many long-term care insurance policies, Medicare Supplement policies, and annuities were being sold within each market, and how many were being lost to the competition. These surveys provided a read on the market size for each insurance product at both a regional and national level.

While closeness to the customer is a great advantage of sales force composite forecasts, they too have problems. Sales agents tend to be overly optimistic in their forecasts and may set unrealistic goals. In addition, because sales agents are close to the customer, their opinions are likely to be swayed by individual customers' purchase decisions (microeconomic factors), when in fact aggregate sales are often driven by macroeconomic factors. Supplementing sales force composite forecasts with more formal quantitative forecasting methods, where possible, is often recommended.

Surveys of Expectations

We actually covered surveys of expectations in our December 9, 2010 Forecast Friday post, but let me quickly recap. When data isn't available for forecasting, companies can conduct surveys to gather opinions and expectations. Marketing research of this kind can be expensive, so surveys of expectations are generally reserved for cases where they are believed to provide valuable information. Surveys work well for new product development, brand awareness, and market penetration. In the December 9, 2010 post, the audience of the expectation survey was mostly executives and other business experts; in this post, the audience is consumers.

NCH Marketing Services, both the leading processor of grocery coupons and a leading coupon promotion firm – and also a former employer of mine – used surveys to obtain information on coupon usage. The company even asked respondents how many coupons they took to the store in a typical month. From there, the company could estimate the number of coupons redeemed in the U.S. annually.
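The arithmetic of scaling a survey mean up to a national estimate is simple; the sketch below shows the general idea with invented inputs (these are not NCH's actual figures or method):

```python
# Hypothetical projection of survey results to a national estimate.
# Every input below is invented purely to illustrate the arithmetic.
mean_coupons_per_month = 6.2     # average coupons respondents say they take to the store
redemption_rate = 0.45           # assumed share of carried coupons actually redeemed
us_households = 105_000_000      # assumed number of U.S. households

annual_redemptions = mean_coupons_per_month * 12 * redemption_rate * us_households
print(f"Estimated annual U.S. redemptions: {annual_redemptions / 1e9:.1f} billion")
```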

Summary

Companies must often rely solely on expert judgment when looking ahead. The Jury of Executive Opinion, sales force composite forecasts, and consumer surveys are just some of the approaches companies can take to predict the future when more formal quantitative methods are either unavailable or unreliable.

Next Forecast Friday Topic: The Delphi Method

********************************************************

Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!


Read All About It: Why Newspapers Need Marketing Analytics

October 26, 2010

After nearly 20 years, I decided to let my subscription to the Wall Street Journal lapse. A few months ago, I did likewise with my longtime subscription to the Chicago Tribune. I didn’t want to end my subscriptions, but as a customer, I felt my voice wasn’t being heard.

Some marketing research and predictive modeling might have enabled the Journal and the Tribune to keep me from defecting. From these efforts, both publications could have spotted my increasing frustration and dissatisfaction and intervened before I chose to vote with my feet.

Long story short, I let both subscriptions lapse for the same reason: chronic unreliable delivery, which was allowed to fester for many years despite numerous calls by me to their customer service numbers about missing and late deliveries.

Marketing Research

Both newspapers could have used marketing research to alert them to the likelihood that I would not renew my subscriptions. They each had lots of primary research readily available to them, without needing to do any surveys: my frequent calls to their customer service department, with the same complaint.

Imagine the wealth of insights both papers could have reaped from this data: they could identify the most common breaches of customer service; spot unresolved problems by counting the times customers complained about the same issue; determine, by breaking down the most frequent complaints by geography, whether additional delivery persons needed to be hired or more training was necessary; and, most of all, find their most frequent complainers and reach out to them to see what could be improved.
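A few lines of analysis code could surface all of these insights from a complaint log; here is a sketch, with hypothetical column names and values:

```python
import pandas as pd

# Sketch of mining a customer service log; the fields are hypothetical.
complaints = pd.DataFrame({
    "subscriber_id": [101, 101, 102, 103, 101],
    "zip_code": ["60601", "60601", "60614", "60601", "60601"],
    "issue": ["missed delivery", "late delivery", "missed delivery",
              "missed delivery", "missed delivery"],
})

# Most common breaches of customer service
print(complaints["issue"].value_counts())

# Repeat complaints about the same issue point to unresolved problems
repeats = complaints.groupby(["subscriber_id", "issue"]).size()
print(repeats[repeats > 1])

# Complaints by geography: where might more carriers or training help?
print(complaints.groupby("zip_code").size().sort_values(ascending=False))

# The most frequent complainers, for proactive outreach
print(complaints["subscriber_id"].value_counts().head())
```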

Both newspapers could have also conducted regular customer satisfaction surveys of their subscribers, asking about overall satisfaction and likelihood of renewing, followed by questions about subscribers’ perceptions about delivery service, quality of reporting, etc. The surveys could have helped the Journal and the Tribune grab the low-hanging fruit by identifying the key elements of service delivery that have the strongest impact on subscriber satisfaction and likelihood of renewal, and then coming up with a strategy to secure satisfaction with those elements.
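One common way to find those key elements is a key-driver analysis: regress overall satisfaction on the attribute ratings and see which attributes carry the most weight. The sketch below assumes hypothetical survey fields and a hypothetical file name; it is an illustration, not either paper's actual practice:

```python
import pandas as pd
import statsmodels.api as sm

# Key-driver sketch: regress overall satisfaction on attribute ratings.
# The file name and survey fields are hypothetical.
survey = pd.read_csv("subscriber_survey.csv")
attributes = survey[["delivery_reliability", "reporting_quality", "price_value"]]

model = sm.OLS(survey["overall_satisfaction"], sm.add_constant(attributes)).fit()

# Larger (significant) coefficients mark the strongest satisfaction drivers
print(model.summary())
```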

Predictive Modeling

Another way both newspapers might have been able to intervene and retain my business would have been to predict my likelihood of lapse. This so-called attrition or "churn" modeling is common in industries built on continuity relationships: newspapers and magazines, credit cards, membership associations, health clubs, banks, wireless communications, and broadband cable, to name a few.

Attrition modeling (which, incidentally, will be discussed in the next two upcoming Forecast Friday posts) involves developing statistical models comparing attributes and characteristics of current customers with those of former, or churned, customers. The dependent variable being measured is whether a customer churned, so it would be a 1 if “yes” and a 0 if “no.”

Essentially, in building the model, the newspapers would look at several independent, or predictor, variables: customer demographics (e.g., age, income, gender, etc.), frequency of complaints, and geography, to name a few. The model would then identify the variables that are the strongest predictors of whether a subscriber will not renew, and generate a score between 0 and 1 indicating each subscriber's probability of not renewing. For example, a probability score of .72 indicates a 72% chance that a subscriber will let his/her subscription lapse, and that the newspaper may want to intervene.
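A minimal version of such a model can be sketched with logistic regression, a standard choice for a 0/1 outcome like churn. The data file and predictor fields below are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Sketch of an attrition ("churn") model; file and fields are hypothetical.
# The target "churned" is 1 if the subscriber lapsed, 0 if he/she renewed.
subs = pd.read_csv("subscribers.csv")
X = subs[["age", "income", "complaints_last_12mo", "tenure_years"]]
y = subs["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict_proba's second column is the probability of churn (class 1);
# a subscriber scoring .72 has a 72% chance of lapsing
lapse_prob = model.predict_proba(X_test)[:, 1]
at_risk = X_test[lapse_prob > 0.70]  # flag subscribers worth a retention call
```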

In my case, both newspapers might have run such an attrition model to see if the number of complaints in the last 12 months was a strong predictor of whether a subscriber would lapse. If so, I would have had a high probability of churn, and they could have called me; or, if they found that churned subscribers were clustered in a particular area, they could have looked for systemic breakdowns in customer service in that area. Either way, both papers could have found a way to salvage the subscriber relationship.


Why Surveys Go Well With Predictive Models

October 13, 2010

Thanks to advancements in technology, companies now have the capability to analyze millions – if not billions – of transactional, demographic, and psychographic records in a short time and develop sophisticated models that can address several questions: how likely a customer is to purchase again; when he/she will purchase again; how much he/she will spend in the next year; how likely he/she is to defect; and many more. Yet, by themselves, predictive models don't provide a complete picture or profile of the customer. While models can provide information on a prospect or customer's willingness and ability to purchase based on similar characteristics of current customers, they don't provide much information about the customer or prospect's readiness to buy. Hence, a survey can be a highly useful supplement.

Using a survey before a promotion – assuming no attempt is made to sell to the customer under the guise of the survey – can provide valuable information. With a simple attitudinal and behavioral survey, a marketer can gain a read on the market's readiness and willingness to buy at that moment. Moreover, the marketer can gauge the purchase readiness of certain customer groups and segments, and structure marketing promotions in a manner that makes the best use of marketing dollars. In addition, if certain groups are wary of or unwilling to buy a product, the marketer can look for ways to reach out to those groups in the future.

Another benefit of surveys is that they help classify customers and prospects into market segments based on their answers to carefully designed questions. Often, surveys can capture data about prospects and customers that transactional and third-party overlay data sources cannot.
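As a sketch of how survey answers can drive segmentation, a clustering algorithm such as k-means can group respondents with similar attitudes; the file name and question fields below are hypothetical:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Segmentation sketch: cluster respondents on attitudinal ratings.
# The file name and question fields (1-10 scale answers) are hypothetical.
answers = pd.read_csv("survey_responses.csv")
questions = ["price_sensitivity", "brand_loyalty", "readiness_to_buy"]

scaled = StandardScaler().fit_transform(answers[questions])
answers["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

# Profile each segment by its average answers
print(answers.groupby("segment")[questions].mean())
```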

Surprisingly, many companies do either marketing research or predictive modeling, but not both, squandering a great marketing opportunity. Together, these two approaches can supply the missing pieces of the puzzle and help marketers improve their planning, increase their marketing ROI, and maximize their profits and market share.

Marketing Research in Practice

October 12, 2010

Most of the topics I have written about cover the concepts of marketing research in theory. Today, I want to give you an overview of how marketing research works in practice. The practical side deserves periodic discussion because the realities of business are constantly changing, and the ideal approach to research and the feasible one can be very far apart.

Recently, I submitted a bid to a prospective client who was looking to conduct a survey of a population that was difficult to reach. My bid came in higher than expected, and the department that was to execute the findings of the survey was on a tight budget. I had to explain that the largest cost driver was hiring a marketing research firm to provide the sample. One faction within the company wanted to move ahead at the price I quoted; another wanted to look for ways to reduce the scope of the study and hence the cost. The tradeoff between cost and scope is often the first issue that emerges in the practice of marketing research.

Much of the practice of marketing research parallels what economists have long referred to as "the basic economic problem": limited resources against unlimited wants. Thanks to the push for company departments to work cross-functionally, there have never been more stakeholders in the outcome of marketing research, each function bringing its own agenda to the research. The scope of a study can expand greatly because of the many stakeholders involved; yet the time and money available for the study are finite.

Another issue that comes up is the selection of the marketing research vendor. Ideally, a company should retain a vendor who is strong in the type of research methodology that needs to be done. In reality, however, this isn't always possible. Many marketers don't deal with marketing research vendors often enough to know their areas of expertise; many believe every vendor is the same. That's hardly the case. Before I started Analysights, I worked for a membership association. The association had conducted an employee satisfaction survey and retained a firm that had conducted several such surveys. As part of the project, the firm would compare the association's ratings to those of other companies' employees who took a similar survey. However, most of the employers who called on this firm to conduct surveys were financial institutions – banks in particular – and their ratings were not comparable to those of the association. As a result, the peer comparison was useless.

Moreover, picking a vendor who is well-versed in a particular methodology may not be possible precisely because they do it so well that they charge a premium for the service. Hence, clients often must settle for second-best solutions.

There are many other political issues that come up in the practice of marketing research, too numerous to list here. The key thing to remember is that marketing research provides information, and information confers power. The department that controls the information wields great power in the organization, and the struggle for that control often results in less-than-ideal marketing research outcomes.

To ensure that your marketing research outcomes come as close to ideal as possible, it is necessary to take a series of proactive steps. First, get all the stakeholders together. Setting aside concerns about money and time, the stakeholders as a group should determine the objectives of the study. Once the objectives are set, the group needs to think through the information they need to meet those objectives. Collectively, they should distinguish between the "need to know" and the "nice to know" information, and go with the former first. Generally, about 20% of the findings you generate will provide nearly 80% of the actionable information you need. It's always best to start with a study design whose results provide the greatest amount of relevant, actionable information at the smallest scope possible.

Once the stakeholders agree on the objectives and the information they must obtain to meet them, they should also agree on the tradeoffs among the cost of executing the research, the sophistication of the approach, and the data to be collected. Then the timeframe and budget should be considered. Once the tradeoffs have been agreed to, the study's scope can be adjusted to fit the time allotted and the money available.

Marketing research, in theory, focuses on the approaches and tools for doing research. In practice, however, marketing research encompasses much more: office politics and culture; time and budget constraints; dealing with organizational power and conflict; and finding the appropriate political and resource balance for conducting the study.

Rankings – not Ratings – Matter in Customer Satisfaction Research

October 5, 2010

Companies spend countless dollars each year trying to measure and improve customer satisfaction. Much research has indicated that improved customer satisfaction brings about improved sales and share of wallet. Yet, the relationship is a weak one. Despite how satisfied customers say they are in customer satisfaction surveys, nearly 80% of their spending doesn’t relate to their stated satisfaction. Why is that?

In the Fall 2010 issue of Marketing Research, Jan Hofmeyr and Ged Parton of Synovate offer two reasons for this weak relationship between business results and satisfaction: companies don't measure how their customers feel about competitors, nor do they recognize that they should concern themselves with the company's rank, not its rating. For these reasons, the authors argue, models of what drives customer share of wallet inspire little confidence.

Hofmeyr and Parton suggest some ways companies can make these improvements. Companies can start by getting ratings of the competition from the same respondent. Suppose, for example, you ask your customers to rate your company on a set of attributes that you believe are part of the customer satisfaction experience. If one customer gives a rating of "9" on a 10-point satisfaction scale, and another gives a rating of "8," you are naturally inclined to treat the first customer as more likely to return and do business with you in the future. But that is only one piece of the puzzle, the authors say. What if you ask your customers to also rate your competition on those same attributes, and the first customer assigns a competitor a "10" while the second customer assigns the competitor a "7"? The first customer is very satisfied with your company, but even more satisfied with your competitor; the second customer may not be as satisfied with your company as the first, but he/she prefers your company over the competition. You'd probably want to spend more time with the one who gave the "8" rating.

In this example, the authors are essentially turning ratings into rankings. The ranking, not the rating, the authors say, is the key to increased share of wallet. Hofmeyr and Parton's research showed that if a customer shopped predominantly at two retailers, then regardless of rating, as long as the customer rated one retailer higher than the other, the top-ranked retailer got an average of between 59% and 68% of the customer's wallet, while the lower-ranked retailer got just 32% on average. If a customer shopped at three retailers, the pattern was similar: the top-ranked retailer got as much as a 58% share of the customer's wallet; the second-place retailer, 25%; and the lowest-ranked, 17%.
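The transformation the authors describe is easy to sketch in code: rank each respondent's ratings within that respondent, so that preference order, not the absolute score, is what gets analyzed. The data below reproduce the two-customer example above:

```python
import pandas as pd

# Turn satisfaction ratings into per-respondent rankings, echoing the
# two-customer example above (data are illustrative).
ratings = pd.DataFrame({
    "respondent": [1, 1, 2, 2],
    "company":    ["Us", "Competitor", "Us", "Competitor"],
    "rating":     [9, 10, 8, 7],
})

# Rank within each respondent: 1 = most preferred
ratings["rank"] = (ratings.groupby("respondent")["rating"]
                          .rank(ascending=False, method="min"))
print(ratings)
# Respondent 1 rates us a 9 but ranks the competitor first;
# respondent 2 rates us only an 8 yet ranks us first.
```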

While it is important to have customers rate your company on satisfaction, it is just as important to have them rate your competition on the same evoked set and then order and rescale the ratings so that you can see where your company stands. By ranking your company with respect to your competition, you can much more easily determine gaps between satisfaction expectations and delivery so that you can increase share of wallet.