Archive for the ‘Judgmental Forecasts’ Category

Forecast Friday Topic: Judgmental Bias in Forecasting

March 17, 2011

(Fortieth in a series)

Over the last several weeks, we have discussed many of the qualitative forecasting methods, approaches that rely heavily on judgment and less on analytical tools. Because judgmental forecasting techniques rely upon a person’s thought processes and experiences, they can be highly subject to bias. Today, we will complete our coverage of judgmental forecasting methods with a discussion of some of the common biases they inspire.

Inconsistency and Conservatism

Two opposing biases in judgmental forecasting are inconsistency and conservatism. Inconsistency occurs when decision-makers apply different decision criteria in similar situations. Sometimes memories fade; other times, a manager or decision-maker may overestimate the impact of some new or extraneous event that makes the current situation seem different from the previous one; he or she could be influenced by that day’s mood, or may simply want to try something new out of boredom. Inconsistency can have serious negative repercussions.

One way to overcome inconsistency is to have a set of formal decision rules, or “expert systems,” that set objective criteria for decision-making and must be applied to each similar forecasting situation. These criteria would be the factors to measure, the weight each one gets, and the objective of the forecasting project. When formal decision rules are imposed and applied consistently, forecasts tend to improve. However, it is important to monitor your environment as your expert systems are applied, so that they can be changed as your market evolves. Otherwise, failing to change a process in light of strong new information or evidence introduces a new bias: conservatism.
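As a minimal sketch of such a formal decision rule (the criterion names, weights, and ratings below are entirely hypothetical, not drawn from any particular expert system), the idea is simply that every similar situation gets scored against the same fixed, weighted criteria:

```python
# Sketch of a formal decision rule: fixed criteria and weights applied
# identically to every forecasting situation (names and weights hypothetical).

WEIGHTS = {"market_growth": 0.5, "competitor_activity": 0.3, "seasonality": 0.2}

def score_situation(ratings):
    """Combine criterion ratings (0-10) into one score using fixed weights."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Two similar situations are scored with the same rule, never ad hoc:
q1 = score_situation({"market_growth": 7, "competitor_activity": 4, "seasonality": 6})
q2 = score_situation({"market_growth": 6, "competitor_activity": 5, "seasonality": 6})
```

The point is not the arithmetic but the discipline: the weights are set once, in advance, so mood, boredom, or fading memory cannot quietly change the criteria between one forecast and the next.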

Now, have I just contradicted myself? No. Learning must always be applied in any expert system. We live in a dynamic world, not a static one. However, most changes to our environment, and hence our expert systems, don’t occur dramatically or immediately; they occur gradually and more subtly. It’s important to apply your expert systems and practice them for a time, monitoring everything else in the environment, as well as the quality of the forecasts your expert systems produce. If the gap between your forecast and actual performance is growing consistently, then it might be time to revisit your criteria. Perhaps you assigned too much or too little weight to one or more factors; perhaps new technologies are being introduced in your industry.
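One common way to watch for a consistently growing gap between forecast and actual performance is a tracking signal: cumulative error divided by mean absolute deviation. This is a sketch, not a prescription; the often-quoted alert threshold of roughly ±4 is a rule of thumb, and the figures below are hypothetical:

```python
def tracking_signal(forecasts, actuals):
    """Cumulative forecast error divided by mean absolute deviation (MAD).
    A signal drifting beyond roughly +/-4 suggests the forecasting
    criteria should be revisited."""
    errors = [a - f for f, a in zip(forecasts, actuals)]
    cumulative_error = sum(errors)
    mad = sum(abs(e) for e in errors) / len(errors)
    return cumulative_error / mad

# Forecasts that run consistently below actuals produce a large positive signal:
ts = tracking_signal([100, 102, 104, 106], [108, 111, 113, 116])
```

Random over- and under-forecasting cancels out in the cumulative error, so the signal stays near zero; a persistent one-sided gap, the symptom described above, pushes it steadily away from zero.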

Decision-makers walk a fine line between inconsistency and conservatism in judgmental forecasts. Trying to reduce one bias may inspire another.


Recency

Often, when there are shocks in the economy, or disasters, these recent events tend to dominate our thoughts about the future. We tend to believe these conditions are permanent, so we downplay or ignore relevant events from the past. To avoid recency bias, we must remember that business cycles exist, and that ups and downs don’t last forever. Moreover, we should still keep expert systems in place that force us to consider all factors relevant to forecasting the event of interest.


Optimism

I’m guilty of this bias! Actually, many people are. Our projections are often clouded by the future outcomes we desire. Sometimes, we feel compelled to provide rosy projections because of pressure from higher-up executives. Unfortunately, optimism in forecasting can be very dangerous, and its repercussions severe when it is discovered how far apart our forecasted and actual results are. Many a company’s stock price has plunged because of overly optimistic forecasts. The best ways to avoid optimism are to have a disinterested third party generate the forecasts, or to have other individuals make their own independent forecasts.


These are just a sample of the biases common in judgmental forecasting methods. And as you’ve probably guessed, deciding which biases you can live with and which you cannot is itself a subjective decision! In general, for your judgmental forecasts to be accurate, you must consistently guard against biases and have set procedures in place for decision-making that include learning as you go along.


Next Forecast Friday Topic: Combining Forecasts

For the last 10 months, I have introduced you to the various ways by which forecasts are generated and the strengths and limitations of each approach. Organizations frequently generate multiple forecasts based on different approaches, decision criteria, and assumptions. Finding a way to combine these forecasts into a representative composite forecast for the organization, as well as evaluating each forecast, is crucial to the learning process and, ultimately, the success of the organization. So, beginning with next week’s Forecast Friday post, we begin our final Forecast Friday mini-series on combining and evaluating forecasts.

Forecast Friday Topic: Other Judgmental Forecasting Methods

March 3, 2011

(Thirty-ninth in a series)

Over the last several weeks, we discussed a series of different non-quantitative forecasting methods: Delphi Method, Jury of Executive Opinion, Sales Force Composite Forecasts, and Surveys of Expectations. In today’s post, we’ll finish with a brief discussion of three more judgmental forecasting methods: Scenario Writing, La Prospective, and Cross-Impact Analysis.

Scenario Writing

When a company’s or industry’s long-term future is far too difficult to predict (whose isn’t!), it is common for experts in that company or industry to ponder possible situations in which the company or industry may find itself in the distant future. The documentation of these situations – scenarios – is known as scenario writing. Scenario writing seeks to get managers thinking in terms of possible outcomes at a future time where quantitative forecasting methods may be inadequate. Unfortunately, much of the literature on this approach suggests that writing multiple scenarios does not yield forecasts of much better quality than any of the other judgmental methods we’ve discussed to date.

La Prospective

Developed in France, La Prospective eschews quantitative models and emphasizes several potential futures that may result from the activities of individuals. Interactions among several events, many of which can be, and indeed are, dynamic in structure and constantly evolving, are studied, their impacts cross-analyzed, and their effect on the future assessed. La Prospective devotes considerable attention to the power, strategies, and resources of the individual “agents” whose actions will influence the future. Because the different components being analyzed can be dynamic, the forecasting process for La Prospective is often not linear; stages can progress in a different or simultaneous order. And the company doing the forecasting may itself be one of the influential agents involved, which helps it assess the value of any actions it might take. After the La Prospective process is complete, scenarios of the future are written, from which the company can formulate strategies.

Cross-Impact Analysis

Cross-impact analysis seeks to account for the interdependence of uncertain future events. Quite often, the occurrence of one future event can be caused or determined by the occurrence of another. And often, an analyst may have strong knowledge of one event and little or no knowledge of the others. For example, in trying to predict the future price of tissue, experts at companies like Kimberly-Clark, along with resource economists, forest experts, and conservationists, may all have useful views. If a country with vast acreages of timber imposes more stringent regulations on the cutting down of trees, that can result in sharp increases in the price of tissue. Moreover, if there is a major increase, or even a sharp reduction, in the incidence of influenza or the common cold – the realm of epidemiologists – that too can influence the price of tissue. And even the current tensions in the Middle East – the realm of foreign policy experts – can affect the price of tissue. If tensions in the Middle East escalate, the price of oil shoots up, driving up the price of the energy required to convert the timber into paper, as well as the price of gas to transport the timber to the paper mill and the tissue to wholesalers and retailers. Cross-impact analysis measures the likelihood that each of these events will occur and attempts to assess the impact they will have on the future of the event of interest.
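A minimal numeric sketch of the cross-impact idea (every probability below is hypothetical, chosen only to illustrate the mechanics): start with an unconditional probability for each event, plus a matrix of conditional probabilities given that some other event occurs, then ask how the occurrence of one event revises the probabilities of the rest:

```python
# Cross-impact sketch (all probabilities hypothetical): how does the
# occurrence of one event revise the probabilities of the others?

events = ["logging_regulation", "flu_outbreak", "oil_price_spike"]
prior = {"logging_regulation": 0.30, "flu_outbreak": 0.20, "oil_price_spike": 0.25}

# conditional[a][b] = P(a occurs | b occurs)
conditional = {
    "logging_regulation": {"flu_outbreak": 0.30, "oil_price_spike": 0.35},
    "flu_outbreak": {"logging_regulation": 0.20, "oil_price_spike": 0.20},
    "oil_price_spike": {"logging_regulation": 0.25, "flu_outbreak": 0.25},
}

def revise(occurred):
    """Replace each prior with its probability conditional on `occurred`."""
    return {e: (1.0 if e == occurred else conditional[e][occurred])
            for e in events}

# If the oil price spike happens, tighter logging regulation becomes
# somewhat more likely (0.30 -> 0.35 in this toy matrix):
revised = revise("oil_price_spike")
```

Real cross-impact exercises iterate this kind of revision over many events and rounds; the sketch only shows the core bookkeeping of interdependent probabilities.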

Next Forecast Friday Topic: Judgmental Bias in Forecasting

Now that we have discussed several of the judgmental forecasting techniques available to analysts, it is obvious that, unlike quantitative methods, these techniques are not objective. Because, as their name implies, judgmental forecasting methods are based on judgment, they are highly susceptible to biases. Next week’s Forecast Friday post will discuss some of the biases that can result from judgmental forecasting methods.

Forecast Friday Topic: The Delphi Method

February 17, 2011

(Thirty-eighth in a series)

Last week we discussed the role of expert judgment in making forecasts. When quantitative data are not available, or when we are trying to predict a major structural shift in the future, we often rely on those people who are well-versed and knowledgeable in the field for which we seek forecasts. The Delphi Method is one way to do this.

Developed at the start of the Cold War by the Rand Corporation, the Delphi Method has its grounding in technological forecasting, as it was designed to forecast the impact of technology on warfare. The name “Delphi” comes from the Oracle of Delphi, which in Greek mythology foretold the future. Quantitative models are often of limited use when trying to predict far into the future. Environmental patterns, largely driven by technology changes, can be altered dramatically over long periods of time. When projecting far into the future, we want to know how probable, frequent, or intense these future events are or will be. This is where Delphi comes in.

The Delphi Method is a structured, interactive, iterative communication technique joining together experts to share their opinions on the future. Unlike the Jury of Executive Opinion, which we discussed in last week’s post, this panel of experts does not meet face-to-face. This ensures that experts’ opinions are not influenced by those of other panel members. The number of experts on the panel is large, and many of them may differ greatly in their areas of expertise.

Panel members are given questionnaires asking them a series of “what,” “if,” “what if,” or “when” questions about the future. They may even be presented with scenarios and asked to predict the probability of such a scenario occurring and when it may occur. Differences in experience, information availability, and interpretation methods among panel members will ensure a wide diversity of views. In order to move panelists toward consensus, their opinions are summarized and shared (anonymously) with the other panel members, and the panelists are encouraged to adjust their predictions based on these viewpoints. When certain panel members hold views substantially different from the group median, they are asked to provide written justification, so that the strength of their opinions can be determined. After a few iterations, the group tends to move toward a consensus forecast.
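The convergence mechanism can be sketched as a toy simulation (not a prescribed Delphi algorithm: the initial estimates are invented, and the assumption that each panelist moves halfway toward the group median each round is arbitrary):

```python
import statistics

def delphi_round(estimates, pull=0.5):
    """One Delphi iteration, crudely modeled: each panelist moves a
    fraction `pull` of the way toward the group median after seeing
    the anonymized summary of everyone's views."""
    median = statistics.median(estimates)
    return [e + pull * (median - e) for e in estimates]

panel = [50.0, 80.0, 100.0, 150.0, 400.0]  # hypothetical initial estimates
for _ in range(3):
    panel = delphi_round(panel)
# After a few rounds the spread around the median shrinks sharply,
# while the median itself stays put.
```

Note how this toy model illustrates the drawback mentioned below: iteration reliably tightens the panel around its own median, but nothing in the mechanism guarantees that median is near the true future outcome.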

The Delphi Method is not without its drawbacks. While the absence of face-to-face meetings eliminates biased viewpoints brought on by authority, seniority, and articulateness, it also greatly reduces – if not eliminates – immediate access to the knowledge of others. Hence, panelists provide their views in isolation and, based on their experiences, may not consider certain facts in their assessments. Moreover, Delphi techniques can be expensive and time consuming, as experts’ time is at a premium, and searching for them can be intense. In addition, because the Delphi Method is used to predict several years into the future, a lot of time must be allowed to elapse before one can determine whether the method was appropriate for the task on which it was used. Finally, while the iterative process moves experts toward a group median, it’s less clear that the process pulls the group toward the true future outcome.

Next Forecast Friday Topic: Other Judgmental Forecasting Methods

In next week’s Forecast Friday post, we will discuss a few other judgmental forecasting approaches that are used when quantitative data are not available. The week after that, we will discuss the various judgmental biases that exist in forecasting. These next two posts will round out our discussion of judgmental methods, after which we will move into the final segment of the series, “Combining and Evaluating Forecasts.”


Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!

Forecast Friday Topic: Expert Judgment

February 10, 2011

(Thirty-seventh in a series)

Last week, we began our discussion of judgmental forecasting methods, talking about judgmental extrapolation, which required no real understanding of the physical process behind the time series. Today, we will talk about more sophisticated judgmental techniques that are used in subjective forecasting; “sophisticated” only in the sense that the opinion of “experts” is used in trying to predict the future. The three techniques we will discuss are the Jury of Executive Opinion, sales force composite forecasts, and surveys of expectations.

Jury of Executive Opinion

The Jury of Executive Opinion is quite often seen in an organization’s budgeting and strategic planning process. The “jury” is often a group of high-level executives from all areas of the organization – marketing, finance, human resources, manufacturing, etc. – who come together to discuss their respective areas of business and work to come up with a composite forecast of where the organization’s business will be. Each executive shares his/her opinions and weighs and evaluates those of the other executives. After discussion, the executives write down their forecasts, which are then averaged.
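The final aggregation step is nothing more than a simple average of the written forecasts (a sketch; the departments and figures below are hypothetical):

```python
# Jury of Executive Opinion: after discussion, each executive writes
# down an independent forecast, and the jury's answer is the average
# (departments and figures hypothetical).

forecasts = {"marketing": 1200, "finance": 1050, "manufacturing": 1100, "hr": 1150}
composite = sum(forecasts.values()) / len(forecasts)
```

Averaging the written numbers, rather than talking until one number wins, at least dilutes the influence of the loudest or most senior voice, though as noted below it does not remove it.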

One example of the Jury of Executive Opinion takes me back to 1999-2000, when I worked for catalog retailer Hammacher Schlemmer. Hammacher Schlemmer convened a weekly committee to estimate the orders coming in for the next two weeks from each of the active catalogs in circulation. The committee was made up of several marketing personnel, including myself (as I was the forecasting analyst!), and managers from the warehouse, in-bound call center, inventory control, and merchandising. We would begin every Wednesday morning reviewing the number of orders that came in for each active catalog, for the prior week and the first two days of the current week. Armed with that order information, and spreadsheets detailing order history for those catalogs’ prior years, each of us would indicate our order forecasts for the several weeks ahead. Our forecasts were then averaged, and we would then submit the composite forecasts to the warehouse and call center to assist with their staffing, and to inventory control to ensure adequate purchasing.

One of the nice things about the Jury of Executive Opinion is its simplicity. Getting executives to sound off is often pretty easy to do. Moreover, incorporating the experiences of a broad group into the forecasting process may enable companies to see the forest beyond the trees.

However, simple and broad-focused as it may be, the Jury of Executive Opinion is not without its flaws. These meetings can be time consuming, for one. Indeed, at Hammacher Schlemmer, during the last three months of the year – when the holiday season was in full swing – those weekly meetings could take all morning, as nearly a dozen catalogs could be in circulation. Furthermore, group dynamics may actually lead to unwise consensus forecasts. The group is often at risk of being swayed by the opinions of those members who are most articulate, or who have greater seniority or rank within the organization, or just by its own over-optimism. Another problem is that the passage of time makes it difficult to recognize which experts’ opinions were most reliable and whose should be given less weight. As a result, there’s no way to hold any individual member accountable for a forecast. Finally, executives are more comfortable using their opinions for mid- and longer-range planning than for shorter period-to-period predictions, especially since recent unexpected events can also influence their opinions.

Sales Force Composite Forecasts

When companies have a product that is sold by sales agents in specific territories, it is not uncommon for them to seek the opinions of their sales representatives or branch/territory managers in developing forecasts for each product line. In fact, sales representatives’ opinions can be quite useful, since they are generally close to the customer, and may be able to provide useful insights into purchase intent. Essentially, these companies have their agents develop forecasts for each of the products they sell within a territory. The added benefit of this approach is that a company can develop a forecast for the entire market, as well as for individual territories.
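The roll-up from territory-level to market-level forecasts can be sketched as follows (territories, products, and unit figures are all hypothetical):

```python
# Sales force composite sketch (territories and figures hypothetical):
# each agent forecasts by product within a territory; territory
# forecasts then roll up into a market-level composite.

territory_forecasts = {
    "midwest": {"product_a": 500, "product_b": 300},
    "northeast": {"product_a": 650, "product_b": 420},
}

def market_forecast(by_territory):
    """Sum each product's forecast across all territories."""
    totals = {}
    for products in by_territory.values():
        for product, units in products.items():
            totals[product] = totals.get(product, 0) + units
    return totals

totals = market_forecast(territory_forecasts)
```

Because the inputs are kept at the territory level, the same data yields both the market-wide total and per-territory forecasts, the dual benefit described above.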

Indeed, when I worked in the market research department of insurance company Bankers Life & Casualty during 1997-1999, we frequently conducted surveys of our sales force and branch managers to understand how many long-term care insurance policies, Medicare Supplement policies, and annuities were being sold within each market, and how much were being lost to the competition. These surveys would provide a read into the market size for each insurance product at both a regional and national level.

While closeness to the customer is a great advantage of sales force composite forecasts, they too have problems. Sales agents have a tendency to be overly optimistic in their forecasts and may set unrealistic goals. In addition, because sales agents are close to the customer, their opinions are likely to be swayed by microeconomic purchase decisions, when in fact aggregate sales are often driven by macroeconomic factors. Supplementing sales force composite forecasts with more formal quantitative forecasting methods, where possible, is often recommended.

Surveys of Expectations

We actually covered surveys of expectations in our December 9, 2010 Forecast Friday post, but let me quickly recap. Sometimes when data aren’t available for forecasting, companies can conduct surveys to gather opinions and expectations. Marketing research in this fashion is often expensive, so surveys of expectations are used when it is believed they will provide valuable information. Surveys work well for new product development, brand awareness, and market penetration. In our December 9, 2010 Forecast Friday topic, the audience of the expectations survey was mostly executives and other business experts; in this post, the audience is consumers.

NCH Marketing Services – both the leading processor of grocery coupons and a leading coupon promotion firm, and also a former employer of mine – used surveys to obtain information on coupon usage. The company even asked respondents how many coupons they took to the store in a typical month. From there, the company would estimate the number of coupons redeemed in the U.S. annually.


Companies often must rely solely on expert judgment for looking ahead. The Jury of Executive Opinion, sales force composite forecasts, and consumer surveys are just some of the approaches companies can take to predict the future when more formal quantitative methods are either unavailable or unreliable.

Next Forecast Friday Topic: The Delphi Method



Forecast Friday Topic: Judgmental Extrapolation

February 3, 2011

(Thirty-sixth in a series)

The forecasting methods we have discussed since the start of the Forecast Friday series have been quantitative. Formal quantitative models are often quite useful for predicting the near future, as the recent past often indicates expected results for the future. However, things change over time. While predictive models might be useful in forecasting the number of visits to your Web site next month, they may be less relevant to predicting your company’s social media patterns five or 10 years from now. Technology is likely to change dramatically during that time. Hence, more qualitative, or judgmental, forecasts are often required. Thus begins the next section of our series: Judgmental Methods in Forecasting.

Yet even with short-run forecasting, human judgment should be a part of the forecasts. A time series model can’t explain why a pattern is happening; it can only make predictions based on the patterns in the series it has “learned.” It cannot take into account the current environment in which those numbers came about, or information some experts in the field have about events likely to occur. Hence, forecasts by models should never be the “be-all, end-all.”

Essentially, there are two types of judgmental forecasting: subject matter expertise, which we will discuss in next week’s post, and judgmental extrapolation, which is today’s topic. Judgmental extrapolation – also known as bold freehand extrapolation – is the crudest form of judgmental forecasting, and there’s really no expertise required to do it. Judgmental extrapolation is simply looking at the graph of a time series and making projections based upon visual inspection. That’s all there is to it; no understanding of the physical process behind the time series is required.

The advantage of judgmental extrapolation (the only one I could find, anyway) is its efficiency: it doesn’t require a lot of time, effort, understanding of the series, or money. But that’s efficiency, not accuracy! When time and money are short, judgmental extrapolation is sometimes the only way to go. But if you already have a time series, you might get better results just plugging it into Excel and using its exponential smoothing or regression tools – and even that is relatively time- and cost-efficient.
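To show how little effort that alternative takes: simple exponential smoothing, of the kind spreadsheet tools provide, is only a few lines of code. This is a sketch; the smoothing constant alpha = 0.3 and the history figures are arbitrary choices for illustration:

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each smoothed value is a weighted
    blend of the newest observation and the previous smoothed value."""
    smoothed = [series[0]]
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

# The last smoothed value serves as the one-step-ahead forecast:
history = [120, 132, 125, 140, 138]
forecast = exponential_smoothing(history)[-1]
```

Unlike eyeballing a graph, this at least makes the extrapolation reproducible: two analysts given the same series and the same alpha will produce the same forecast.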

Unfortunately, there are no definitive findings in the published literature on the accuracy of judgmental extrapolation. I tend to be among its skeptics. Perhaps the strongest finding I’ve seen for the accuracy of judgmental forecasts (and it’s not really an argument in favor!) is that, when shown graphs of forecasts, individuals can adjust them in ways that improve the forecasts – but only if the forecasts themselves are far from optimal. That was the finding of T. R. Willemain, in a 1991 article in the International Journal of Forecasting.

So why do I mention judgmental extrapolation? As I said before, sometimes you need to make decisions quickly and without resources or adequate information. What’s more, judgmental extrapolation’s value – though not proven – has also not been disproven. Until its value is disproven, judgmental extrapolation should be considered another tool in the forecasting arsenal.

Next Forecast Friday Topic: Expert Judgment

Today we talked about forecasts relying upon non-expert judgment. Next week, we’ll talk about judgmental forecasts that are based on the opinion of subject matter experts.

