Archive for February, 2011

Forecast Friday will Resume Next Thursday, March 3.

February 23, 2011

Sorry for the inconvenience. I’ve been on assignment. Forecast Friday will return next Thursday.


Forecast Friday Topic: The Delphi Method

February 17, 2011

(Thirty-eighth in a series)

Last week we discussed the role of expert judgment in making forecasts. When quantitative data are not available, or when we are trying to predict a major structural shift in the future, we often rely on those people who are well-versed and knowledgeable in the field for which we seek forecasts. The Delphi Method is one way to do this.

Developed at the start of the Cold War by the RAND Corporation, the Delphi Method has its grounding in technological forecasting: it was designed to forecast the impact of technology on warfare. The name “Delphi” comes from the Oracle of Delphi, who in Greek mythology foretold the future. Quantitative models are often of limited use when trying to predict far into the future. Environmental patterns, largely driven by technological change, can be altered dramatically over long periods of time. When projecting far into the future, we want to know how probable, frequent, or intense future events will be. This is where Delphi comes in.

The Delphi Method is a structured, interactive, iterative communication technique that brings experts together to share their opinions on the future. Unlike the Jury of Executive Opinion, which we discussed in last week’s post, this panel of experts does not meet face-to-face. This ensures that experts’ opinions are not influenced by those of other panel members. The number of experts on the panel is large, and many of them may differ greatly in their areas of expertise.

Panel members are given questionnaires asking them a series of “what,” “if,” “what if,” or “when” questions about the future. They may even be presented with scenarios and asked to predict the probability of such a scenario occurring and when it may occur. Differences in experiences, information availability, and interpretation methods between panel members will ensure a wide diversity of views. In order to move panelists toward consensus, their opinions are summarized and shared (anonymously) with the other panel members, and the panelists are encouraged to adjust their predictions based on these viewpoints. When certain panel members hold views substantially different from the group median, they are asked to provide written justification, so that the strength of their opinions can be determined. After a few iterations, the group tends to move toward a consensus forecast.
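The iterative pull toward the group median can be sketched in a few lines of Python. This is a hypothetical simulation, not part of any formal Delphi protocol: each round, panelists see the anonymized group median and adjust their estimates part-way toward it.

```python
import statistics

def delphi_rounds(estimates, rounds=3, pull=0.5):
    """Simulate Delphi iteration: each round, panelists see the
    anonymized group median and move part-way toward it."""
    history = [list(estimates)]
    for _ in range(rounds):
        med = statistics.median(estimates)
        estimates = [e + pull * (med - e) for e in estimates]
        history.append(list(estimates))
    return history

# Five experts' initial forecasts (hypothetical: years until some event)
panel = [5.0, 8.0, 10.0, 12.0, 20.0]
result = delphi_rounds(panel)
# The spread shrinks each round as opinions converge on the median
```

In practice, of course, the adjustment step is a human judgment informed by the written justifications of outliers, not a mechanical move toward the median; the sketch only illustrates why a few iterations tend to produce consensus.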

The Delphi Method is not without its drawbacks. While the absence of face-to-face meetings eliminates biased viewpoints brought on by authority, seniority, and articulation, it also greatly reduces – if not eliminates – immediate access to the knowledge of others. Hence, panelists provide their views in isolation and, based on their experiences, may not consider certain facts in their assessments. Moreover, Delphi techniques can be expensive and time-consuming, as experts’ time is at a premium, and searching for them can be intense. In addition, because the Delphi Method is used to predict several years into the future, a lot of time must be allowed to elapse before one can determine whether the method was appropriate for the task on which it was used. Finally, while the iterative process moves experts toward a group median, it is less clear that it pulls the group toward the true future outcome.

Next Forecast Friday Topic: Other Judgmental Forecasting Methods

In next week’s Forecast Friday post, we will be discussing a few other judgmental forecasting approaches that are used when quantitative data is not available. The week after that, we will discuss the various judgmental biases that exist in forecasting. These next two posts will round out our discussion of judgmental methods, after which we will move into our final segment of the series, “Combining and Evaluating Forecasts.”


Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!

Forecast Friday Topic: Expert Judgment

February 10, 2011

(Thirty-seventh in a series)

Last week, we began our discussion of judgmental forecasting methods, talking about judgmental extrapolation, which required no real understanding of the physical process behind the time series. Today, we will talk about more sophisticated judgmental techniques that are used in subjective forecasting; “sophisticated” only in the sense that the opinion of “experts” is used in trying to predict the future. The three techniques we will discuss are the Jury of Executive Opinion, sales force composite forecasts, and surveys of expectations.

Jury of Executive Opinion

The Jury of Executive Opinion is quite often seen in an organization’s budgeting and strategic planning process. The “jury” is often a group of high-level executives from all areas of the organization – marketing, finance, human resources, manufacturing, etc. – who come together to discuss their respective areas of business and work to come up with a composite forecast of where the organization’s business will be. Each executive shares his/her opinions and weighs and evaluates those of the other executives. After discussion, the executives write down their forecasts, which are then averaged.
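The averaging step is as simple as it sounds. As a quick sketch (the executives and numbers here are hypothetical), the composite is just the unweighted mean of the individual forecasts:

```python
# Hypothetical unit-sales forecasts from four executives after discussion
jury_forecasts = {
    "marketing": 120000,
    "finance": 100000,
    "manufacturing": 110000,
    "human resources": 105000,
}

# The composite forecast is the simple (unweighted) average
composite = sum(jury_forecasts.values()) / len(jury_forecasts)
print(composite)  # 108750.0
```

Some organizations weight the average by each executive’s track record instead, but the plain mean is the classic form of the method.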

One example of the Jury of Executive Opinion takes me back to 1999-2000, when I worked for catalog retailer Hammacher Schlemmer. Hammacher Schlemmer convened a weekly committee to estimate the orders coming in for the next two weeks from each of the active catalogs in circulation. The committee was made up of several marketing personnel, including myself (as I was the forecasting analyst!), and managers from the warehouse, in-bound call center, inventory control, and merchandising. We would begin every Wednesday morning reviewing the number of orders that came in for each active catalog, for the prior week and the first two days of the current week. Armed with that order information, and spreadsheets detailing order history for those catalogs’ prior years, each of us would indicate our orders forecasts for the next several weeks ahead. Our forecasts were then averaged, and we would then submit the composite forecasts to the warehouse and call center to assist with their staffing, and to inventory control to ensure adequate purchasing.

One of the nice things about the Jury of Executive Opinion is its simplicity. Getting executives to sound off is often pretty easy to do. Moreover, incorporating the experiences of a broad group into the forecasting process may enable companies to see the forest for the trees.

However, simple and broad-focused as it may be, the Jury of Executive Opinion is not without its flaws. These meetings can be time-consuming, for one. Indeed, at Hammacher Schlemmer, during the last three months of the year – when the holiday season was in full swing – those weekly meetings could take all morning, as nearly a dozen catalogs could be in circulation. Furthermore, group dynamics may actually lead to unwise consensus forecasts. The group is often at risk of being swayed by the opinions of those members who are most articulate, or with greater seniority or rank within the organization, or just by their own over-optimism. Another problem is that the passage of time makes it difficult to recognize which experts’ opinions were most reliable and which should be given less weight. As a result, there’s no way to hold any individual member accountable for a forecast. Finally, executives are more comfortable with using their opinions for mid- and longer-range planning than for shorter period-to-period predictions, especially since recent unexpected events can also influence their opinion.

Sales Force Composite Forecasts

When companies have a product that is sold by sales agents in specific territories, it is not uncommon for them to seek the opinions of their sales representatives or branch/territory managers in developing forecasts for each product line. In fact, sales representatives’ opinions can be quite useful, since they are generally close to the customer, and may be able to provide useful insights into purchase intent. Essentially, these companies have their agents develop forecasts for each of the products they sell within a territory. The added benefit of this approach is that a company can develop a forecast for the entire market, as well as for individual territories.
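Rolling agent-level forecasts up to territory and market totals is a straightforward aggregation. Here is a minimal sketch with hypothetical territories, products, and numbers:

```python
from collections import defaultdict

# Hypothetical agent forecasts: (territory, product, units)
agent_forecasts = [
    ("Midwest", "Policy A", 340),
    ("Midwest", "Policy B", 120),
    ("Northeast", "Policy A", 410),
    ("Northeast", "Policy B", 95),
]

territory_totals = defaultdict(int)
product_totals = defaultdict(int)
for territory, product, units in agent_forecasts:
    territory_totals[territory] += units  # forecast for each territory
    product_totals[product] += units      # forecast for each product line
market_total = sum(u for _, _, u in agent_forecasts)  # whole-market forecast
```

The same roll-up gives a company both views the post mentions: a forecast for the entire market and one for each individual territory.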

Indeed, when I worked in the market research department of insurance company Bankers Life & Casualty during 1997-1999, we frequently conducted surveys of our sales force and branch managers to understand how many long-term care insurance policies, Medicare Supplement policies, and annuities were being sold within each market, and how much were being lost to the competition. These surveys would provide a read into the market size for each insurance product at both a regional and national level.

While the closeness to the customer is a great advantage of sales force composite surveys, they too have problems. Sales agents have a tendency to be overly optimistic in their forecasts and may set unrealistic goals. In addition, because sales agents are close to the customer, their opinions are likely to be swayed by microeconomic purchase decisions, when in fact aggregate sales are often driven by macroeconomic factors. Supplementing sales force composite forecasts with more formal quantitative forecasting methods, if possible, is often recommended.

Surveys of Expectations

We actually covered surveys of expectations in our December 9, 2010 Forecast Friday post, but let me just quickly go through it. Sometimes when data isn’t available for forecasting, companies can conduct surveys to get opinions and expectations. Marketing research in this fashion is often expensive, so surveys of expectations are used only when it is believed they will provide valuable information. Surveys work well for new product development, brand awareness, and market penetration. In our December 9, 2010 Forecast Friday topic, the audience of the expectation survey was mostly executives and other business experts. In this post, the audience is consumers.

NCH Marketing Services, both the leading processor of grocery coupons and a leading coupon promotion firm – and also a former employer of mine – used surveys to obtain information on coupon usage. The company even asked respondents how many coupons they took to the store in a typical month. From there, the company would estimate the number of coupons redeemed in the U.S. annually.


Companies often must rely solely on expert judgment for looking ahead. The Jury of Executive Opinion, sales force composite forecasts, and consumer surveys are just some of the approaches companies can take to predict the future when more formal quantitative methods are either unavailable or unreliable.

Next Forecast Friday Topic: The Delphi Method



Forecast Friday Topic: Judgmental Extrapolation

February 3, 2011

(Thirty-sixth in a series)

The forecasting methods we have discussed since the start of the Forecast Friday series have been quantitative. Formal quantitative models are often quite useful for predicting the near future, as the recent past often indicates expected results for the future. However, things change over time. While predictive models might be useful in forecasting the number of visits to your Web site next month, they may be less relevant to predicting your company’s social media patterns five or 10 years from now. Technology is likely to change dramatically during that time. Hence, more qualitative, or judgmental, forecasts are often required. Thus begins the next section of our series: Judgmental Methods in Forecasting.

Yet even with short-run forecasting, human judgment should be a part of the forecasts. A time series model can’t explain why a pattern is happening; it can only make predictions based on the patterns in the series it has “learned.” It cannot take into account the current environment in which those numbers came about, or information some experts in the field have about events likely to occur. Hence, forecasts by models should never be the “be-all, end-all.”

Essentially, there are two types of judgmental forecasting: subject matter expertise, which we will discuss in next week’s post, and judgmental extrapolation, which is today’s topic. Judgmental extrapolation – also known as bold freehand extrapolation – is the crudest form of judgmental forecasting, and there’s really no expertise required to do it. Judgmental extrapolation is simply looking at the graph of a time series and making projections based upon visual inspection. That’s all there is to it; no understanding of the physical process behind the time series is required.

The advantage of judgmental extrapolation (the only one I could find, anyway) is its efficiency: it doesn’t require a lot of time, effort, understanding of the series, or money. But that’s efficiency, not accuracy! When time and money are short, judgmental extrapolation is sometimes the only way to go. But if you already have a time series, you might get better results just plugging it into Excel and using its exponential smoothing or regression tools – and even that is relatively time and cost efficient.
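The exponential smoothing alternative really is nearly as cheap as eyeballing a graph. A minimal single-exponential-smoothing sketch (the visit counts and the smoothing constant alpha are hypothetical) looks like this:

```python
def exponential_smoothing(series, alpha=0.3):
    """Single exponential smoothing: each smoothed value blends the
    latest observation with the previous smoothed value."""
    smoothed = [series[0]]  # initialize with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical monthly Web-site visits
visits = [100, 110, 105, 120, 115, 130]
fit = exponential_smoothing(visits)
next_month_forecast = fit[-1]  # last smoothed value = one-step-ahead forecast
```

Unlike freehand extrapolation, this at least produces a repeatable forecast whose smoothing constant can be tuned against past errors.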

Unfortunately, there are no definitive findings in the published literature on the accuracy of judgmental extrapolation. I tend to be among its skeptics. Perhaps the strongest finding I’ve seen for the accuracy of judgmental forecasts (and it’s not really an argument in favor!) is that, when shown graphs of forecasts, individuals can adjust them in ways that improve the forecasts, but only if the forecasts themselves are far from optimal! That was the finding of T. R. Willemain, in a 1991 article in the International Journal of Forecasting.

So why do I mention judgmental extrapolation? As I said before, sometimes you need to make decisions quickly and without resources or adequate information. What’s more, judgmental extrapolation’s value – though not proven – has also not been disproven. Until its value is disproven, judgmental extrapolation should be considered another tool in the forecasting arsenal.

Next Forecast Friday Topic: Expert Judgment

Today we talked about forecasts relying upon non-expert judgment. Next week, we’ll talk about judgmental forecasts that are based on the opinion of subject matter experts.

