Archive for the ‘Forecast Fridays’ Category

Forecast Friday Topic: Evaluation of Forecasts

April 14, 2011

(Last in the series)

We have finally come to the end of our almost year-long Forecast Friday journey. During this period, we have discussed various forecasting methods, including regression analysis, exponential smoothing, moving average methods, and the basics of both ARIMA and logistic regression models. We also discussed qualitative, or judgmental, forecasting methods; how to diagnose your regression models for violations such as multicollinearity, autocorrelation, heteroscedasticity, and specification bias; and a series of other forecasting topics, like the identification problem, leading economic indicators, calendar effects, and the combination of forecasts. Now, we move on to the last part of the forecasting process: evaluating forecasts.

How well does your forecast model perform? That question should be the crux of your evaluation, and it relates directly to your company’s bottom line. You need to consider the costs to your company of forecasting too high and of forecasting too low. If you own a toy store and your sales forecasts for some stock-keeping units (SKUs) are too high, you risk having to mark those items down on clearance. On the other hand, if your forecast is too low, you risk running out of stock. Which type of mistake is more costly to your company? How much error in each direction can you afford to tolerate? These are questions you must consider.
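To make that trade-off concrete, here is a minimal back-of-the-envelope sketch. All of the numbers (markdown cost, lost margin, unit counts) are made-up assumptions for illustration, not figures from this post:

```python
# Hypothetical comparison of the cost of forecasting too high vs. too low.
# Every figure below is an assumption chosen only to illustrate the arithmetic.

markdown_cost_per_unit = 4.00   # assumed clearance markdown on each unsold toy
lost_margin_per_unit = 10.00    # assumed profit lost on each toy you couldn't stock

units_over = 200    # hypothetical overstock if the forecast runs high by 200 units
units_short = 200   # hypothetical stockout if the forecast runs low by 200 units

cost_of_forecasting_high = units_over * markdown_cost_per_unit    # $800
cost_of_forecasting_low = units_short * lost_margin_per_unit      # $2,000

print(f"Cost of forecasting too high: ${cost_of_forecasting_high:,.2f}")
print(f"Cost of forecasting too low:  ${cost_of_forecasting_low:,.2f}")
```

With these assumed numbers, under-forecasting is the costlier mistake, so you might tolerate more error on the high side than on the low side.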

Your models are useless if you don’t track how well they perform. Any time you generate a forecast, your model will not only give you a point forecast, but also a prediction interval associated with a given level of confidence. The point forecast is the midpoint of that prediction interval. Each time you generate a forecast, record the actual results. Did actuals fall within the prediction interval? If so, how close to the point forecast did they fall? If not, how far off were you?

As you track forecasts vs. actuals over time, determine how often your actuals fall within or outside your prediction intervals, and how close to the point forecast they are. If your actuals frequently fall far from your point forecast, especially near the upper or lower bounds of your prediction interval, that’s likely a sign that your model needs to be reworked. Indeed, model performance degrades over time. Technological advances, societal changes, shifts in tastes, styles, and preferences, and random events can all increase forecast error, because forecasting models are based on past data and assume that the future will continue to resemble the past.
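A minimal sketch of that kind of tracking, assuming you simply log each point forecast, its prediction interval, and the actual once it arrives (the field names and numbers are illustrative, not from any particular tool):

```python
# Illustrative forecast-tracking log: each record holds the point forecast,
# the prediction interval bounds, and the actual value observed later.
history = [
    {"forecast": 100.0, "lower": 90.0, "upper": 110.0, "actual": 104.0},
    {"forecast": 102.0, "lower": 91.0, "upper": 113.0, "actual": 118.0},
    {"forecast": 105.0, "lower": 93.0, "upper": 117.0, "actual": 109.0},
]

# How often did actuals land inside the prediction interval?
inside = sum(1 for h in history if h["lower"] <= h["actual"] <= h["upper"])
coverage = inside / len(history)

# How close were actuals to the point forecast, on average?
mean_abs_error = sum(abs(h["actual"] - h["forecast"]) for h in history) / len(history)

print(f"Interval coverage: {coverage:.0%}")
print(f"Mean absolute error vs. point forecast: {mean_abs_error:.1f}")
```

A coverage rate well below the confidence level you built the intervals around, or a steadily growing mean absolute error, is the signal to rework the model.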

Forecasting is as much an art as it is a science. And I hasten to add that the ability to forecast is like a muscle – you need to exercise it in order to strengthen it. Forecasts are never consistently perfect, but they can be frequently excellent. Don’t look to become a forecasting “guru.” It doesn’t last. Allow yourself to learn new things from every forecasting process you go through and each forecast evaluation you perform. And if you do that, becoming a great forecaster is in your forecast! And I can’t think of a better note on which to end the Forecast Friday series.

**********

Tell us what you thought of the Forecast Friday series!

We’ve been on a long road with Forecast Friday. I began the series last year because I believed that forecasting is an art that every business, and every marketing, finance, or production professional, could use to go far. Many of you have been tuning in to Forecast Friday each Thursday, so I would appreciate your honest feedback. Please leave comments. Let me know which topic(s) you found most helpful or useful. What could I have done better? What topic(s) should I have covered? Please don’t hold back. The purpose of Insight Central and Forecast Friday is to help you use analytics to advance your business and/or career.

Forecast Friday Will Resume April 14

April 6, 2011

I’ve been on assignment and haven’t been able to devote time to writing this week’s Forecast Friday post, part one of “Evaluating Forecasts.” So, next week, I will write a complete post on the topic and conclude the Forecast Friday series as planned.

Thanks for your patience and understanding, and for your continued interest in the Forecast Friday series.

Alex

Forecast Friday Topic: Does Combining Forecasts Work?

March 31, 2011

(Forty-second in a series)

Last week, we discussed three approaches to combining forecasts: a simple average, assigning weights inversely proportional to the sum of squared errors (SSE), and regression-based weights. We combine forecasts in order to incorporate the best features of each forecasting method used and to minimize the errors of each. But does combining forecasts work in practice? The literature over the years suggests that it does. Newbold and Bos (1994) summarize the research on the combination of forecasts below:

  1. Regardless of the forecasts combined or individual forecasting methods used in the composite, the combined forecast performs quite well, and is often superior to the individual forecasts;
  2. The simple average approach to combining forecasts performs very well;
  3. Weights inversely proportional to SSE generally perform better than regression-based weights, unless only a small number of forecasts are being combined and some are much superior to others. In situations like those, regression-based combining methods, or simply excluding the worst forecasts from the composite, tend to work better than simple averages and weights inversely proportional to SSE.

Why does the combination of forecasts work? Makridakis, Wheelwright, and Hyndman (1998) provide four reasons. First, many forecasters cannot measure the very thing they actually want to forecast. For example, it’s very hard to measure demand for a product or service, so companies measure billings, orders, etc., as proxies for demand. Because the use of proxies can introduce bias into forecasts, the combination of forecasts can reduce the impact of these biases. Second, errors in forecasting are inevitable, and some forecasts have errors that are much greater than others; combining the forecasts can smooth out the forecast error. Third, time series can have patterns or relationships that are unstable or frequently changing, and by combining forecasts we can reduce the errors brought on by random events. Finally, most forecasting models minimize the forecast error for one period ahead, yet forecasts are often needed for several periods ahead, and the further into the future we aim to predict, the less accurate our forecasts become. Combining forecasts helps to minimize the error of forecasts several periods ahead.

Whenever and wherever possible, organizations should try to generate forecasts via several different approaches and then derive a composite forecast. Different approaches draw on different functions within the organization and better represent the real-world factors under which it operates. When those factors are accounted for in the composite forecast, accurate predictions frequently emerge.

Next Forecast Friday Topic: Evaluating Forecasts – Part I

Next week, we will begin the first of a two-part discussion on the evaluation of forecasts. Once we generate forecasts, we must evaluate them periodically. Model performance degrades over time, so we must see how our models are performing and tweak or alter them, or remodel altogether.

********************************************************

Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!

Forecast Friday Topic: Procedures for Combining Forecasts

March 24, 2011

(Forty-first in a series)

We have gone through a series of different forecasting approaches over the last several months. Many times, companies will have multiple forecasts generated for the same item, usually generated by different people across the enterprise, often using different methodologies, assumptions, and data collection processes, and typically for different business problems. Rarely is one forecasting method or forecast superior to another, especially over time. Hence, many companies will opt to combine the forecasts they generate into a composite forecast.

Considerable empirical evidence suggests that combining forecasts works very well in practice. If all the forecasts generated by the alternative approaches are unbiased, then that lack of bias carries over into the composite forecast, a desirable outcome to have.

Two common procedures for combining forecasts include simple averaging and assigning weights inversely proportional to the sum of squares error. We will discuss both procedures in this post.

Simple Average

The quickest, easiest way to combine forecasts is to simply take the forecasts generated by each method and average them. With a simple average, each forecasting method is given equal weight. So, if you are presented with the following five forecasts:

You’ll get the average of $83,000 as your composite forecast.

The simplicity and quickness of this procedure is its main advantage. The chief drawback, however, is that any knowledge that individual methods consistently predict better or worse than others is disregarded in the combination. Moreover, look at the wide variation in the forecasts above: they range from $50,000 to $120,000. Clearly, one or more of these methods’ forecasts will be way off. While the combination of forecasts can dampen the impact of forecast error, outliers can easily skew the composite forecast. If you suspect one or more forecasts may be inferior to the others, you may simply choose to exclude them and apply simple averaging to the forecasts in which you have a reasonable degree of confidence.
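A quick sketch of the simple-average combination follows. Since the table of five forecasts did not survive here, the values below are placeholders chosen only to match the range and average described above ($50,000 to $120,000, averaging $83,000); the original post’s figures may differ:

```python
# Placeholder forecasts (not the original table): five forecasts spanning
# $50,000-$120,000 that average $83,000, as described in the text.
forecasts = [50_000, 70_000, 85_000, 90_000, 120_000]

# Simple average: every forecasting method gets equal weight.
composite = sum(forecasts) / len(forecasts)

print(f"Composite forecast: ${composite:,.0f}")   # $83,000
```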

Assigning Weights in (Inverse) Proportion to Sum of Squared Errors

If you know the past performance of the individual forecasting methods available to you, and you need to combine multiple forecasts, you will likely want to assign greater weights to the methods that have performed best. You will also want to allow the weighting scheme to adapt over time, since the relative performance of forecasting methods can change. One way to do that is to assign each forecast a weight inversely proportional to its sum of squared forecast errors.

Let’s assume you have 12 months of actual sales data (Xt) and three forecasting methods, each generating a forecast for each month (f1t, f2t, and f3t). Each of those three methods has also generated a forecast for month 13, which you are trying to predict. The table below shows these 12 months of actuals and forecasts, along with each method’s forecast for month 13:

How much weight do you give each forecast? Calculate the sum of squared errors for each:

To get the weight for any one forecasting method, divide the sum of the other two methods’ squared errors by the total sum of squared errors for all three methods, and then divide by 2 (three methods minus one). Doing the same for the other two methods gives weights that sum to 1. Hence, the weights are as follows:

 

Notice that the higher weights are given to the forecast methods with the lowest sum of squared error. So, since each method generated a forecast for month 13, our composite forecast would be:

Hence, we would estimate approximately 795 as our composite forecast for month 13. When we obtain month 13’s actual sales, we would recompute the sum of squared errors over months 1-13 for each individual forecast, reassign the weights, and then apply them to each method’s forecast for month 14. Also, notice the fraction ½ at the beginning of each weight equation. The denominator depends on the number of weights we are generating. In this case, we are generating three weights, so our denominator is (3-1)=2. Had we used four methods, the fraction in each weight equation would have been one-third; and had we used only two methods, there would be no fraction at all, because the denominator would be one.
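Here is a minimal sketch of the weighting arithmetic just described. The actuals, forecasts, and month-13 values are made-up stand-ins for the table that accompanied the original post, so the resulting numbers will not reproduce the ~795 composite above:

```python
# Hypothetical actuals and three methods' forecasts (placeholders for the original table).
actuals = [100, 105, 102, 110, 108, 115]
f1 = [ 98, 104, 101, 112, 107, 114]   # method 1
f2 = [105, 100, 108, 104, 115, 120]   # method 2
f3 = [ 90, 112,  95, 118, 100, 108]   # method 3
methods = [f1, f2, f3]

# Sum of squared errors (SSE) for each method over the history.
sse = [sum((a - f) ** 2 for a, f in zip(actuals, fc)) for fc in methods]
total_sse = sum(sse)
m = len(methods)

# Weight for each method, exactly as described above:
# (sum of the OTHER methods' SSE) / (total SSE), divided by (m - 1) so weights sum to 1.
weights = [((total_sse - s) / total_sse) / (m - 1) for s in sse]

# Hypothetical next-period forecasts from each method, combined with the weights.
next_forecasts = [111, 118, 105]
composite = sum(w * f for w, f in zip(weights, next_forecasts))

print("SSE per method:", sse)
print("Weights:", [round(w, 3) for w in weights])   # higher weight goes to lower SSE
print(f"Composite forecast: {composite:.1f}")
```

As the comments note, the method with the smallest SSE ends up with the largest weight, which is the whole point of the scheme.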

Regression-Based Weights – Another Procedure

Another way to assign weights is with regression, but that’s beyond the scope of this post. While the weighting approach above is simple, it’s also ad hoc; regression-based weights can be more theoretically sound. However, in most cases you will not have many months of forecasts for estimating the regression parameters, and you also run the risk of autocorrelated errors, almost certainly for forecasts more than one step ahead. More information on regression-based weights can be found in Newbold & Bos, Introductory Business & Economic Forecasting, Second Edition, pp. 504-508.
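For a rough sense of the idea (the details are in Newbold & Bos), the regression-based approach estimates the combining weights by regressing the actuals on the individual forecasts. The sketch below is only an illustration under assumed data, using an ordinary least-squares fit without an intercept for simplicity; it is not the book’s procedure verbatim:

```python
import numpy as np

# Hypothetical history of actuals and two methods' forecasts (illustrative only).
actuals = np.array([100, 105, 102, 110, 108, 115], dtype=float)
f1 = np.array([ 98, 104, 101, 112, 107, 114], dtype=float)
f2 = np.array([105, 100, 108, 104, 115, 120], dtype=float)

# Regress actuals on the forecasts; the fitted coefficients serve as combining weights.
X = np.column_stack([f1, f2])
weights, *_ = np.linalg.lstsq(X, actuals, rcond=None)

# Apply the estimated weights to each method's next-period forecast.
next_f = np.array([111.0, 118.0])
composite = float(next_f @ weights)

print("Regression-based weights:", np.round(weights, 3))
print(f"Composite forecast: {composite:.1f}")
```

With only a handful of observations, as here, the estimated weights can be unstable, which is exactly the limitation noted above.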

Next Forecast Friday Topic: Effectiveness of Combining Forecasts

Next week, we’ll take a look at the effectiveness of combining forecasts, with a look at the empirical evidence that has been accumulated.


Forecast Friday Topic: Judgmental Bias in Forecasting

March 17, 2011

(Fortieth in a series)

Over the last several weeks, we have discussed many of the qualitative forecasting methods, approaches that rely heavily on judgment and less on analytical tools. Because judgmental forecasting techniques rely upon a person’s thought processes and experiences, they can be highly subject to bias. Today, we will complete our coverage of judgmental forecasting methods with a discussion of some of the common biases they invite.

Inconsistency and Conservatism

Two opposite biases in judgmental forecasting are inconsistency and conservatism. Inconsistency occurs when decision-makers apply different decision criteria in similar situations. Sometimes memories fade; other times, a manager may overestimate the impact of some new or extraneous event that makes the current situation seem different from the previous one, may be influenced by his or her mood that day, or may just want to try something new out of boredom. Inconsistency can have serious negative repercussions.

One way to overcome inconsistency is to have a set of formal decision rules, or “expert systems,” that set objective criteria for decision-making and must be applied to each similar forecasting situation. These criteria would include the factors to measure, the weight each one gets, and the objective of the forecasting project. When formal decision rules are imposed and applied consistently, forecasts tend to improve. However, it is important to monitor your environment as your expert systems are applied, so that they can be changed as your market evolves; failing to change a process in light of strong new information or evidence is another bias, conservatism.

Now, have I just contradicted myself? No. Learning must always be applied in any expert system. We live in a dynamic world, not a static one. However, most change to our environment, and hence to our expert systems, doesn’t occur dramatically or immediately; it occurs gradually and more subtly. It’s important to apply your expert systems and practice them for a time, monitoring anything else in the environment as well as the quality of the forecasts your expert systems are producing. If the gap between your forecast and actual performance is growing consistently, then it might be time to revisit your criteria. Perhaps you assigned too much or too little weight to one or more factors; perhaps new technologies are being introduced in your industry.

Decision-makers walk a fine line between inconsistency and conservatism in judgmental forecasts. Trying to reduce one bias may inspire another.

Recency

Often, when there are shocks in the economy or disasters, these recent events tend to dominate our thoughts about the future. We tend to believe these conditions are permanent, so we downplay or ignore relevant events from the past. To avoid recency bias, we must remember that business cycles exist and that ups and downs don’t last forever. Moreover, we should keep expert systems in place that force us to consider all factors relevant to forecasting the event of interest.

Optimism

I’m guilty of this bias! Actually, many people are. Our projections are often clouded by the future outcomes we desire. Sometimes we feel compelled to provide rosy projections because of pressure from higher-up executives. Unfortunately, optimism in forecasting can be very dangerous, and its repercussions severe when it is discovered how far actual results fall from the forecast. Many a company’s stock price has plunged because of overly optimistic forecasts. The best ways to avoid optimism are to have a disinterested third party generate the forecasts, or to have other individuals make their own independent forecasts.

************

These are just a sample of the biases common in judgmental forecasting methods. And as you’ve probably guessed, deciding which biases you can live with and which you cannot is itself a subjective decision! In general, for your judgmental forecasts to be accurate, you must consistently guard against biases and have set procedures in place for decision-making, procedures that include learning as you go along.

*************************************

Next Forecast Friday Topic: Combining Forecasts

For the last 10 months, I have introduced you to the various ways by which forecasts are generated and the strengths and limitations of each approach. Organizations frequently generate multiple forecasts based on different approaches, decision criteria, and assumptions. Finding a way to combine those forecasts into a representative composite forecast for the organization, as well as evaluating each forecast, is crucial to the learning process and, ultimately, to the success of the organization. So, beginning with next week’s Forecast Friday post, we begin our final Forecast Friday mini-series on combining and evaluating forecasts.