Archive for March, 2011

Forecast Friday Topic: Does Combining Forecasts Work?

March 31, 2011

(Forty-second in a series)

Last week, we discussed three approaches to combining forecasts: a simple average, weights inversely proportional to the sum of squared errors (SSE), and regression-based weights. We combine forecasts in order to incorporate the best features of each forecasting method used and to minimize the errors of each. But does combining forecasts work in practice? The literature over the years suggests that it does. Newbold and Bos (1994) summarize the research on the combination of forecasts as follows:

  1. Regardless of the forecasts combined or individual forecasting methods used in the composite, the combined forecast performs quite well, and is often superior to the individual forecasts;
  2. The simple average approach to combining forecasts performs very well;
  3. Weights inversely proportional to SSE generally perform better than regression-based weights, unless only a small number of forecasts are being combined and some are much superior to others. In situations like those, regression-based combining methods tend to work better than simple averages and inverse-SSE weights, or the worst forecasts should simply be excluded from the composite.

Why does the combination of forecasts work? Makridakis, Wheelwright, and Hyndman (1998) provide four reasons. First, many forecasts cannot measure the very thing of interest. For example, it is very hard to measure demand for a product or service, so companies measure billings, orders, and the like as proxies for demand. Because the use of proxies can introduce bias into forecasts, combining forecasts can reduce the impact of those biases. Second, errors in forecasting are inevitable, and some forecasts have errors that are much greater than others; combining the forecasts can smooth out the forecast error. Third, time series can have patterns or relationships that are unstable or frequently changing, and combining forecasts can reduce the errors brought on by such random events. Finally, most forecasting models minimize forecast error one period ahead, yet forecasts are often needed several periods ahead, and the further into the future we aim to predict, the less accurate our forecasts become. Combining forecasts helps minimize the error of forecasts several periods ahead.

Whenever and wherever possible, organizations should try to generate forecasts via many different approaches and then derive a composite forecast. Different approaches draw on different functions within the organization and better represent the real-world factors under which it operates. When those factors are accounted for in the composite forecast, accurate predictions frequently emerge.

Next Forecast Friday Topic: Evaluating Forecasts – Part I

Next week, we will begin the first of a two-part discussion on the evaluation of forecasts. Once we generate forecasts, we must evaluate them periodically. Model performance degrades over time, so we must check how our models are performing and tweak or alter them, or rebuild them altogether.

********************************************************

Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!


Forecast Friday Topic: Procedures for Combining Forecasts

March 24, 2011

(Forty-first in a series)

We have gone through a series of different forecasting approaches over the last several months. Many times, companies will have multiple forecasts generated for the same item, usually generated by different people across the enterprise, often using different methodologies, assumptions, and data collection processes, and typically for different business problems. Rarely is one forecasting method or forecast superior to another, especially over time. Hence, many companies will opt to combine the forecasts they generate into a composite forecast.

Considerable empirical evidence suggests that combining forecasts works very well in practice. If all the forecasts generated by the alternative approaches are unbiased, then that lack of bias carries over into the composite forecast, a desirable outcome to have.

Two common procedures for combining forecasts are simple averaging and assigning weights inversely proportional to the sum of squared errors. We will discuss both procedures in this post.

Simple Average

The quickest, easiest way to combine forecasts is to simply take the forecasts generated by each method and average them. With a simple average, each forecasting method is given equal weight. So, if you are presented with the following five forecasts:

You’ll get the average of $83,000 as your composite forecast.

The simplicity and quickness of this procedure is its main advantage. However, the chief drawback is that any knowledge that certain methods consistently forecast better or worse than others is disregarded in the combination. Moreover, look at the wide variation in the forecasts above: they range from $50,000 to $120,000, so clearly one or more of these methods’ forecasts will be way off. While the combination of forecasts can dampen the impact of forecast error, such outliers can easily skew the composite forecast. If you suspect one or more forecasts may be inferior to the others, you may simply choose to exclude them and apply simple averaging to the forecasts in which you have some reasonable degree of confidence.
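To make the arithmetic concrete, here is a minimal Python sketch of the simple-average combination. The five forecast values are hypothetical placeholders, chosen only to be consistent with the $83,000 average and the $50,000-to-$120,000 range mentioned above:

# Hypothetical forecasts from five methods (placeholder values consistent
# with the average and range cited in the post, not the original table).
forecasts = [50_000, 70_000, 85_000, 90_000, 120_000]

# Simple average: every forecasting method gets equal weight.
composite = sum(forecasts) / len(forecasts)
print(composite)  # 83000.0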

Assigning Weights in (Inverse) Proportion to Sum of Squared Errors

If you know the past performance of the individual forecasting methods available to you, and you need to combine multiple forecasts, you will likely want to assign greater weights to the methods that have performed best. You will also want to allow the weighting scheme to adapt over time, since the relative performance of forecasting methods can change. One way to do that is to assign each forecast a weight that is inversely proportional to its sum of squared forecast errors.

Let’s assume you have 12 months of actual sales data (Xt) and three forecasting methods, each generating a forecast for each of those months (f1t, f2t, and f3t). Each of the three methods has also generated a forecast for month 13, which you are trying to predict. The table below shows the 12 months of actuals and forecasts, along with each method’s forecast for month 13:

How much weight do you give each forecast? Calculate the sum of squared errors (SSE) for each method:

To get the weight for any one forecast method, divide the sum of the other two methods’ squared errors by the total sum of squared errors for all three methods, and then divide by 2 (the 3 methods minus 1). Do the same for the other two methods, so that the three weights sum to 1. Hence, the weights are as follows:
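In symbols (one way to write the rule just described), with SSE_i denoting method i's sum of squared errors over the 12 months and m the number of methods (here, m = 3):

w_i = \frac{1}{m-1} \cdot \frac{\sum_{j \neq i} \mathrm{SSE}_j}{\sum_{j=1}^{m} \mathrm{SSE}_j}, \qquad i = 1, \dots, m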

 

Notice that higher weights are given to the forecast methods with the lower sums of squared errors. So, since each method generated a forecast for month 13, our composite forecast would be:

Hence, we would estimate approximately 795 as our composite forecast for month 13. When we obtain month 13’s actual sales, we would repeat this process using the sums of squared errors from months 1-13 for each individual forecast method, reassign the weights, and then apply them to each method’s forecast for month 14. Also, notice the fraction ½ at the beginning of each weight equation. The denominator of that fraction depends on the number of weights we are generating: here we are generating three weights, so the denominator is (3-1)=2. If we had used four methods, the fraction in each weight equation would have been one-third; and if we had used only two methods, there would be no fraction at all, since the denominator would be one.
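For readers who want to see the mechanics end to end, here is a short Python sketch of the inverse-SSE weighting procedure described above. The actuals, the three methods’ monthly forecasts, and the month-13 forecasts below are hypothetical placeholders rather than the figures from the table, so the composite it prints will differ from the 795 computed above:

def inverse_sse_weights(actuals, forecasts_by_method):
    # Sum of squared errors for each forecasting method over the history.
    sse = [sum((a - f) ** 2 for a, f in zip(actuals, fc))
           for fc in forecasts_by_method]
    total = sum(sse)
    m = len(sse)
    # weight_i = (total SSE - SSE_i) / ((m - 1) * total SSE);
    # the weights sum to 1, and lower-SSE methods receive higher weights.
    return [(total - s) / ((m - 1) * total) for s in sse]

# Hypothetical 12 months of actual sales and three methods' forecasts.
actuals = [800, 810, 790, 805, 820, 815, 800, 795, 810, 805, 800, 790]
f1 = [790, 805, 795, 800, 815, 810, 805, 790, 805, 800, 795, 785]
f2 = [810, 820, 780, 815, 830, 805, 795, 805, 820, 815, 810, 800]
f3 = [760, 780, 760, 775, 790, 785, 770, 765, 780, 775, 770, 760]

weights = inverse_sse_weights(actuals, [f1, f2, f3])

# Apply the weights to each method's (hypothetical) month-13 forecast.
month_13_forecasts = [795, 805, 770]
composite = sum(w * f for w, f in zip(weights, month_13_forecasts))
print([round(w, 3) for w in weights], round(composite, 1))

Once month 13’s actual arrives, you would append it to the actuals, append each method’s month-13 forecast to its series, recompute the weights over months 1-13, and apply them to the month-14 forecasts.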

Regression-Based Weights – Another Procedure

Another way to assign weights is with regression, but that’s beyond the scope of this post. While the weighting approach above is simple, it is also ad hoc; regression-based weights can be more theoretically sound. However, in most cases you will not have many months of forecasts with which to estimate the regression parameters, and you also run the risk of autocorrelated errors, especially for forecasts more than one step ahead. More information on regression-based weights can be found in Newbold & Bos, Introductory Business & Economic Forecasting, Second Edition, pp. 504-508.

Next Forecast Friday Topic: Effectiveness of Combining Forecasts

Next week, we’ll take a look at the effectiveness of combining forecasts, with a look at the empirical evidence that has been accumulated.


Forecast Friday Topic: Judgmental Bias in Forecasting

March 17, 2011

(Fortieth in a series)

Over the last several weeks, we have discussed many of the qualitative forecasting methods, approaches that rely heavily on judgment and less on analytical tools. Because judgmental forecasting techniques rely upon a person’s thought processes and experiences, they can be highly susceptible to bias. Today, we will complete our coverage of judgmental forecasting methods with a discussion of some of the common biases they give rise to.

Inconsistency and Conservatism

Two opposing biases in judgmental forecasting are inconsistency and conservatism. Inconsistency occurs when decision-makers apply different decision criteria in similar situations. Sometimes memories fade; other times, a manager or decision-maker may overestimate the impact of some new or extraneous event that makes the current situation seem different from previous ones, may be influenced by his or her mood that day, or may simply want to try something new out of boredom. Inconsistency can have serious negative repercussions.

One way to overcome inconsistency is to have a set of formal decision rules, or “expert systems,” that set objective criteria for decision-making and that are applied to each similar forecasting situation. Those criteria typically specify the factors to measure, the weight each one receives, and the objective of the forecasting project. When formal decision rules are imposed and applied consistently, forecasts tend to improve. However, it is important to monitor your environment as your expert systems are applied, so that they can be changed as your market evolves. Otherwise, failing to change a process in light of strong new information or evidence becomes a different bias: conservatism.

Now, have I just contradicted myself? No. Learning must always be part of any expert system. We live in a dynamic world, not a static one. However, most changes to our environment, and hence to our expert systems, don’t occur dramatically or immediately; they occur gradually and more subtly. It’s important to apply your expert systems and practice them for a time, monitoring the rest of the environment as well as the quality of the forecasts your expert systems produce. If the gap between your forecasts and actual performance is growing consistently, it might be time to revisit your criteria. Perhaps you assigned too much or too little weight to one or more factors; perhaps new technologies are being introduced in your industry.

Decision-makers walk a fine line between inconsistency and conservatism in judgmental forecasts. Trying to reduce one bias may inspire another.

Recency

Often, when there are shocks in the economy, or disasters, these recent events tend to dominate our thoughts about the future. We tend to believe these conditions are permanent, so we downplay or ignore relevant events from the past. To avoid recency bias, we must remember that business cycles exist and that ups and downs don’t last forever. Moreover, we should keep expert systems in place that force us to consider all the factors relevant to forecasting the event of interest.

Optimism

I’m guilty of this bias! Actually, many people are. Our projections are often clouded by the future outcomes we desire. Sometimes, we feel compelled to provide rosy projections because of pressure from higher-up executives. Unfortunately, optimism in forecasting can be very dangerous, and its repercussions can be severe when the gap between forecasted and actual results comes to light. Many a company’s stock price has plunged because of overly optimistic forecasts. The best ways to avoid optimism are to have a disinterested third party generate the forecasts, or to have several individuals make their own independent forecasts.

************

These are just a sample of the biases common in judgmental forecasting methods. And as you’ve probably guessed, deciding which biases you can live with and which you cannot is also a subjective decision! In general, for your judgmental forecasts to be accurate, you must consistently guard against bias and have set procedures in place for decision-making that include learning as you go.

*************************************

Next Forecast Friday Topic: Combining Forecasts

For the last 10 months, I have introduced you to the various ways by which forecasts are generated and the strengths and limitations of each approach. Organizations frequently generate multiple forecasts based on different approaches, decision criteria, and assumptions. Finding a way to combine those forecasts into a representative composite for the organization, and evaluating each forecast, is crucial to the learning process and, ultimately, to the success of the organization. So, beginning with next week’s Forecast Friday post, we begin our final Forecast Friday mini-series on combining and evaluating forecasts.

Insight Central Will Resume Week of March 14

March 9, 2011

I’m currently on assignment and unable to post this week. Insight Central, including this week’s scheduled Forecast Friday post on “Judgmental Biases in Forecasting,” will resume next week.

Thanks for understanding.

Alex

Forecast Friday Topic: Other Judgmental Forecasting Methods

March 3, 2011

(Thirty-ninth in a series)

Over the last several weeks, we discussed a series of non-quantitative forecasting methods: the Delphi Method, Jury of Executive Opinion, Sales Force Composite Forecasts, and Surveys of Expectations. In today’s post, we’ll finish with a brief discussion of three more judgmental forecasting methods: Scenario Writing, La Prospective, and Cross-Impact Analysis.

Scenario Writing

When a company’s or industry’s long-term future is far too difficult to predict (whose isn’t!), it is common for experts in that company or industry to ponder the possible situations in which the company or industry may find itself in the distant future. The documentation of these situations – scenarios – is known as scenario writing. Scenario writing seeks to get managers thinking in terms of possible outcomes at a future time when quantitative forecasting methods may be inadequate. Unfortunately, much of the literature on this approach suggests that writing multiple scenarios does not produce forecasts of much better quality than the other judgmental forecasting methods we’ve discussed to date.

La Prospective

Developed in France, La Prospective eschews quantitative models and emphasizes several potential futures that may result from the activities of individuals. Interactions among several events – many of which are dynamic in structure and constantly evolving – are studied, their impacts are cross-analyzed, and their effects on the future are assessed. La Prospective devotes considerable attention to the power, strategies, and resources of the individual “agents” whose actions will influence the future. Because the components being analyzed can be dynamic, the forecasting process for La Prospective is often not linear; stages can progress in a different or simultaneous order. The company doing the forecasting may also be one of the influential agents involved, which helps it assess the value of any actions it might take. After the La Prospective process is complete, scenarios of the future are written, from which the company can formulate strategies.

Cross-Impact Analysis

Cross-impact analysis seeks to account for the interdependence of uncertain future events. Quite often, the occurrence of one future event can be caused or determined by the occurrence of another, and an analyst may have strong knowledge of one event and little or no knowledge about the others. For example, in trying to predict the future price of tissue, experts at companies like Kimberly-Clark, along with resource economists, forest experts, and conservationists, may all have useful views. If a country with vast acreages of timber imposes more stringent regulations on the cutting of trees, the price of tissue can rise sharply. A major increase, or even a sharp reduction, in the incidence of influenza or of the common cold – the realm of epidemiologists – can also influence the price of tissue. And even the current tensions in the Middle East – the realm of foreign policy experts – can affect the price of tissue: if those tensions escalate, the price of oil shoots up, driving up the cost of the energy required to convert the timber into paper, as well as the cost of the fuel to transport the timber to the paper mill and the tissue to wholesalers and retailers. Cross-impact analysis measures the likelihood that each of these events will occur and attempts to assess the impact they will have on the event of interest.

Next Forecast Friday Topic: Judgmental Bias in Forecasting

Now that we have discussed several of the judgmental forecasting techniques available to analysts, it is obvious that, unlike quantitative methods, these techniques are not objective. Because, as their name implies, judgmental forecasting methods are based on judgment, they are highly susceptible to biases. Next week’s Forecast Friday post will discuss some of the biases that can result from judgmental forecasting methods.