
Forecast Friday Topic: Does Combining Forecasts Work?

March 31, 2011

(Forty-second in a series)

Last week, we discussed three approaches to combining forecasts: a simple average, assigning weights inversely proportional to sum of squared error, and regression-based weights. We combine forecasts in order to incorporate the best features of each forecasting method used and to minimize the errors of each. But does combining forecasts work in practice? The literature over the years suggests that it does. Newbold and Bos (1994) summarize the research on the combination of forecasts below:

  1. Regardless of the forecasts combined or individual forecasting methods used in the composite, the combined forecast performs quite well, and is often superior to the individual forecasts;
  2. The simple average approach to combining forecasts performs very well;
  3. Weights inversely proportional to SSE generally perform better than regression-based weights, unless only a small number of forecasts are being combined and some are much superior to others. In situations like those, regression-based combining methods tend to work better than simple averages and inverse-SSE weights, or the worst forecasts should simply be excluded from the composite.

Why does the combination of forecasts work? Makridakis, Wheelwright, and Hyndman (1998) provide four reasons. First, many forecasts can't measure the very thing they are meant to predict. For example, it's very hard to measure demand for a product or service, so companies measure billings, orders, etc., as proxies for demand. Because the use of proxies can introduce bias into forecasts, combining forecasts can reduce the impact of those biases. Second, errors in forecasting are inevitable, and some forecasts have errors that are much greater than others; combining the forecasts can smooth out the forecast error. Third, time series can have patterns or relationships that are unstable or frequently changing, and combining forecasts can reduce the errors brought on by such random events. Finally, most forecasting models minimize the forecast error one period ahead, yet forecasts are often needed for several periods ahead, and the further into the future we aim to predict, the less accurate our forecasts become. Combining forecasts helps to minimize the error of forecasts several periods ahead.

Whenever and wherever possible, organizations should try to generate forecasts via many different approaches and then derive a composite forecast. Different approaches touch on different functions within the organization and better represent the real-world factors under which it operates. When those factors are accounted for in the composite forecast, accurate predictions frequently emerge.

Next Forecast Friday Topic: Evaluating Forecasts – Part I

Next week, we will begin the first of a two-part discussion on the evaluation of forecasts. Once we generate forecasts, we must evaluate them periodically. Model performance degrades over time, so we must monitor how our models are performing and tweak them, alter them, or remodel altogether.

********************************************************

Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!


Forecast Friday Topic: Procedures for Combining Forecasts

March 24, 2011

(Forty-first in a series)

We have gone through a series of different forecasting approaches over the last several months. Many times, companies will have multiple forecasts generated for the same item, usually generated by different people across the enterprise, often using different methodologies, assumptions, and data collection processes, and typically for different business problems. Rarely is one forecasting method or forecast superior to another, especially over time. Hence, many companies will opt to combine the forecasts they generate into a composite forecast.

Considerable empirical evidence suggests that combining forecasts works very well in practice. If all the forecasts generated by the alternative approaches are unbiased, then that lack of bias carries over into the composite forecast, a desirable outcome to have.

Two common procedures for combining forecasts are simple averaging and assigning weights inversely proportional to the sum of squared errors. We will discuss both procedures in this post.

Simple Average

The quickest, easiest way to combine forecasts is to simply take the forecasts generated by each method and average them. With a simple average, each forecasting method is given equal weight. So, if you are presented with five forecasts ranging from $50,000 to $120,000 whose mean is $83,000, then $83,000 is your composite forecast.

The simplicity and quickness of this procedure are its main advantages. The chief drawback, however, is that any knowledge that some individual methods consistently forecast better or worse than others is disregarded in the combination. Moreover, look at the wide variation in the forecasts above: they range from $50,000 to $120,000, so one or more of these methods' forecasts will clearly be way off. While the combination of forecasts can dampen the impact of forecast error, outliers can easily skew the composite forecast. If you suspect one or more forecasts may be inferior to the others, you may simply choose to exclude them and apply simple averaging to the forecasts in which you have a reasonable degree of confidence.
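To make the simple-average procedure concrete, here is a minimal Python sketch. The five forecast values are hypothetical, chosen only to be consistent with the $50,000-$120,000 range and the $83,000 average described above.

```python
# Simple-average composite forecast: every method gets equal weight.
# These five values are hypothetical illustrations, not the original table.
forecasts = [50_000, 65_000, 80_000, 100_000, 120_000]

composite = sum(forecasts) / len(forecasts)
print(f"Composite forecast: ${composite:,.0f}")  # Composite forecast: $83,000
```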

Assigning Weights in (Inverse) Proportion to Sum of Squared Errors

If you know the past performance of the individual forecasting methods available to you, and you need to combine multiple forecasts, you will likely want to assign greater weights to the forecast methods that have performed best. You will also want to allow the weighting scheme to adapt over time, since the relative performance of forecasting methods can change. One way to do that is to assign each forecast a weight inversely proportional to its sum of squared forecast errors.

Let’s assume you have 12 months of actual sales data (Xt) and three forecasting methods, each generating a forecast for each month (f1t, f2t, and f3t). Each of those three methods has also generated a forecast for month 13, which you are trying to predict. Suppose you have laid out these 12 months of actuals and forecasts in a table, along with each method’s forecast for month 13.

How much weight do you give each forecast? Start by calculating each method’s sum of squared errors (SSE) over the 12 months.

To get the weight for one forecast method, divide the sum of the other two methods’ squared errors by the total sum of squared errors for all three methods, and then divide by 2 (the three methods minus 1). Do the same for the other two methods so that the weights sum to 1. Hence, the weights are as follows:

w1 = ½ × (SSE2 + SSE3) ÷ (SSE1 + SSE2 + SSE3)
w2 = ½ × (SSE1 + SSE3) ÷ (SSE1 + SSE2 + SSE3)
w3 = ½ × (SSE1 + SSE2) ÷ (SSE1 + SSE2 + SSE3)

Notice that the higher weights go to the forecast methods with the lowest sums of squared errors. Since each method generated a forecast for month 13, our composite forecast is the weighted sum of those three forecasts: w1 × f1,13 + w2 × f2,13 + w3 × f3,13.

Hence, we would estimate approximately 795 as our composite forecast for month 13. When we obtain month 13’s actual sales, we would repeat this process using the sums of squared errors from months 1-13 for each individual forecast, reassign the weights, and then apply them to each method’s forecast for month 14. Also, notice the fraction ½ at the beginning of each weight equation. The denominator depends on the number of weights we are generating. In this case, we are generating three weights, so our denominator is (3 - 1) = 2. If we had used four methods, the fraction in each weight equation would have been one-third; and if we had used only two methods, there would be no fraction at all, because the divisor would be one.
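The arithmetic above is easy to script. The following is a rough Python sketch of the procedure; the SSE figures and month-13 forecasts are hypothetical stand-ins, since the original 12-month table is not reproduced here.

```python
import numpy as np

def inverse_sse_weights(sse):
    """Weight method i as (sum of the other methods' SSEs) / (total SSE) / (k - 1)."""
    sse = np.asarray(sse, dtype=float)
    k = len(sse)
    return (sse.sum() - sse) / (sse.sum() * (k - 1))

sse = np.array([1200.0, 800.0, 2000.0])              # hypothetical SSEs for methods 1, 2, 3
month13_forecasts = np.array([810.0, 790.0, 760.0])  # hypothetical month-13 forecasts

weights = inverse_sse_weights(sse)                   # [0.35, 0.40, 0.25]; sums to 1
composite = float(weights @ month13_forecasts)       # lowest-SSE method gets the largest weight
print(weights, round(composite))                     # composite of about 790 with these inputs
```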

Regression-Based Weights – Another Procedure

Another way to assign weights is with regression, though a full treatment is beyond the scope of this post. While the weighting approach above is simple, it’s also ad hoc; regression-based weights are more theoretically sound. However, in most cases you will not have many months of forecasts available for estimating the regression parameters, and you also run the risk of autocorrelated errors, especially for forecasts more than one step ahead. More information on regression-based weights can be found in Newbold & Bos, Introductory Business & Economic Forecasting, Second Edition, pp. 504-508.
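As a rough sketch of the idea only, here is one way such weights could be estimated, assuming the combining weights come from an ordinary least-squares regression of the actuals on the individual forecasts. The data below are simulated placeholders, not figures from this post.

```python
import numpy as np

rng = np.random.default_rng(42)
actuals = 700 + 10 * np.arange(12) + rng.normal(0, 20, 12)  # simulated 12-month series
F = np.column_stack([
    actuals + rng.normal(0, 15, 12),   # simulated forecasts from method 1 (most accurate)
    actuals + rng.normal(0, 25, 12),   # method 2
    actuals + rng.normal(0, 40, 12),   # method 3 (least accurate)
])

# Least-squares combining weights (no intercept). With only 12 observations the
# estimates are noisy and the errors may be autocorrelated, as cautioned above.
weights, *_ = np.linalg.lstsq(F, actuals, rcond=None)

f_month13 = np.array([825.0, 815.0, 790.0])  # hypothetical month-13 forecasts
print(weights, f_month13 @ weights)
```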

Next Forecast Friday Topic: Effectiveness of Combining Forecasts

Next week, we’ll take a look at the effectiveness of combining forecasts, with a look at the empirical evidence that has been accumulated.


Forecast Friday Topic: Other Judgmental Forecasting Methods

March 3, 2011

(Thirty-ninth in a series)

Over the last several weeks, we discussed a series of non-quantitative forecasting methods: the Delphi Method, Jury of Executive Opinion, Sales Force Composite Forecasts, and Surveys of Expectations. In today’s post, we’ll finish with a brief discussion of three more judgmental forecasting methods: Scenario Writing, La Prospective, and Cross-Impact Analysis.

Scenario Writing

When a company’s or industry’s long-term future is far too difficult to predict (whose isn’t!), it is common for experts in that company or industry to ponder possible situations in which the company or industry may find itself in the distant future. The documentation of these situations, or scenarios, is known as scenario writing. Scenario writing seeks to get managers thinking in terms of possible outcomes at a future time for which quantitative forecasting methods may be inadequate. Unfortunately, much of the literature on this approach suggests that writing multiple scenarios does not produce forecasts of much better quality than any of the other judgmental forecasting methods we’ve discussed to date.

La Prospective

Developed in France, La Prospective eschews quantitative models and emphasizes several potential futures that may result from the activities of individuals. Interactions among several events, many of which are dynamic in structure and constantly evolving, are studied; their impacts are cross-analyzed; and their effects on the future are assessed. La Prospective devotes considerable attention to the power, strategies, and resources of the individual “agents” whose actions will influence the future. Because the different components being analyzed can be dynamic, the forecasting process for La Prospective is often not linear; stages can progress in a different order or simultaneously. The company doing the forecasting may itself be one of the influential agents involved, which helps it assess the value of any actions it might take. After the La Prospective process is complete, scenarios of the future are written, from which the company can formulate strategies.

Cross-Impact Analysis

Cross-impact analysis seeks to account for the interdependence of uncertain future events. Quite often, the occurrence of one future event can be caused or determined by the occurrence of another. And often an analyst may have strong knowledge of one event and little or no knowledge of the others. For example, in trying to predict the future price of tissue, experts at companies like Kimberly-Clark, along with resource economists, forest experts, and conservationists, may all have useful views. If a country with vast acreages of timber imposes more stringent regulations on the cutting of trees, the price of tissue can rise sharply. Moreover, a major increase, or even a sharp reduction, in the incidence of influenza or the common cold (the realm of epidemiologists) can also influence the price of tissue. And even the current tensions in the Middle East (the realm of foreign policy experts) can affect the price of tissue: if those tensions escalate, the price of oil shoots up, driving up the price of the energy required to convert the timber into paper, and also the price of the fuel needed to transport the timber to the paper mill and the tissue to wholesalers and retailers. Cross-impact analysis measures the likelihood that each of these events will occur and attempts to assess the impact they will have on the future of the event of interest.

Next Forecast Friday Topic: Judgmental Bias in Forecasting

Now that we have discussed several of the judgmental forecasting techniques available to analysts, it is obvious that, unlike quantitative methods, these techniques are not objective. Because, as their name implies, judgmental forecasting methods are based on judgment, they are highly susceptible to biases. Next week’s Forecast Friday post will discuss some of the biases that can result from judgmental forecasting methods.

Forecast Friday Topic: Judgmental Extrapolation

February 3, 2011

(Thirty-sixth in a series)

The forecasting methods we have discussed since the start of the Forecast Friday series have been quantitative. Formal quantitative models are often quite useful for predicting the near future, as the recent past often indicates expected results for the future. However, things change over time. While predictive models might be useful in forecasting the number of visits to your Web site next month, they may be less relevant to predicting your company’s social media patterns five or 10 years from now. Technology is likely to change dramatically during that time. Hence, more qualitative, or judgmental, forecasts are often required. Thus begins the next section of our series: Judgmental Methods in Forecasting.

Yet even with short-run forecasting, human judgment should be a part of the forecasts. A time series model can’t explain why a pattern is happening; it can only make predictions based on the patterns in the series it has “learned.” It cannot take into account the current environment in which those numbers came about, or information some experts in the field have about events likely to occur. Hence, forecasts by models should never be the “be-all, end-all.”

Essentially, there are two types of judgmental forecasting: subject matter expertise, which we will discuss in next week’s post, and judgmental extrapolation, which is today’s topic. Judgmental extrapolation, also known as bold freehand extrapolation, is the crudest form of judgmental forecasting, and there’s really no expertise required to do it. Judgmental extrapolation is simply looking at the graph of a time series and making projections based on visual inspection. That’s all there is to it; no understanding of the physical process behind the time series is required.

The advantage of judgmental extrapolation (the only one I could find, anyway) is its efficiency: it doesn’t require a lot of time, effort, money, or understanding of the series. But that’s efficiency, not accuracy! When time and money are short, judgmental extrapolation is sometimes the only way to go. But if you already have a time series, you might get better results just plugging it into Excel and using its exponential smoothing or regression tools, and even that is relatively time and cost efficient.

Unfortunately, there are no definitive findings in the published literature on the accuracy of judgmental extrapolation, and I tend to be among its skeptics. Perhaps the strongest finding I’ve seen for the accuracy of judgmental forecasts (and it’s not really an argument in favor!) is that, when shown graphs of forecasts, individuals can adjust them in ways that improve the forecasts, but only if the forecasts themselves are far from optimal! That was the finding of T. R. Willemain, in a 1991 article in the International Journal of Forecasting.

So why do I mention judgmental extrapolation? As I said before, sometimes you need to make decisions quickly and without resources or adequate information. What’s more, judgmental extrapolation’s value – though not proven – has also not been disproven. Until its value is disproven, judgmental extrapolation should be considered another tool in the forecasting arsenal.

Next Forecast Friday Topic: Expert Judgment

Today we talked about forecasts relying upon non-expert judgment. Next week, we’ll talk about judgmental forecasts that are based on the opinion of subject matter experts.


Forecast Friday Topic: Calendar Effects in Forecasting

December 16, 2010

(Thirty-third in a series)

It is a common practice to compare a particular point in time to its equivalent one or two years ago. Companies often report their earnings and revenues for the first quarter of this year with respect to the first quarter of last year to see if there’s been any improvement or deterioration since then. Retailers want to know if December 2010 sales were higher or lower than December 2009 and even December 2008 sales. Sometimes, businesses want to see how sales compared for October, November, and December. While these approaches seem straightforward, the way the calendar falls can create misleading comparisons and faulty forecasts.

Every four years, February has 29 days instead of the usual 28. That extra day can cause problems in forecasting February sales. In some years Easter falls in April, in others in March, which can cause forecasting nightmares for confectioners, greeting card manufacturers, and retailers alike. And in some years a given month might have five Fridays and/or Saturdays, versus just four in other years. If your business’s sales are much higher on the weekend, these differences can generate significant forecast error.

Adjusting for Month Length

Some months have as many as 31 days, others have 30, and February has 28 or 29. Because this variation in the calendar can cause variation in the time series, it is necessary to make adjustments. If you do not adjust for variation in the length of the month, the effects can show up as a seasonal effect, which may not cause serious forecast errors but will certainly make it difficult to interpret any seasonal patterns. You can easily adjust for month length:

Wt = Xt × (365.25 ÷ 12) ÷ (number of days in month t)

where Wt is the weighted (month-length-adjusted) value of your dependent variable Xt for that month. Hence, if you had sales of $100,000 in February and $110,000 in March, you would start with the numerator: a year averages 365.25 days (counting leap years), and dividing by 12 gives 30.44. Divide that by the number of days in each month to get each month’s adjustment factor. For February, 30.44 divided by 28 gives an adjustment factor of 1.09; for March, 30.44 divided by 31 gives an adjustment factor of 0.98. Then multiply those factors by their respective months’ sales. Your weighted sales would be about $109,000 for February and approximately $108,000 for March. Although sales appear to be higher in March than in February, once you adjust for month length you find that the two months were actually about the same in volume.
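A minimal Python sketch of this adjustment, using the February and March sales from the example above (the function itself is an illustration, not code from the post):

```python
import calendar

def month_length_adjust(value, year, month):
    """Scale a monthly total as if the month had average length (365.25 / 12, about 30.44 days)."""
    days_in_month = calendar.monthrange(year, month)[1]
    return value * (365.25 / 12) / days_in_month

print(round(month_length_adjust(100_000, 2010, 2)))  # February: ~108,700 (about $109,000 with the factor rounded to 1.09)
print(round(month_length_adjust(110_000, 2010, 3)))  # March: ~108,000
```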

Adjusting for Trading Days

As described earlier, a month can have four or five occurrences of the same weekday. As a result, a month may have more trading days in one year than it does in the next, which can cause problems in retail sales and banking. If a month has five Sundays in it, and Sunday is a non-trading day (as is the case in banking), you must account for it. Unlike month-length adjustments, where differences in length from one month to the next are obvious, trading-day adjustments aren’t always precise, as their variance is not as predictable.

In the simplest cases, your approach can be similar to that of the formula above, only you’re dividing the number of trading days in an average month by the number of trading days in a given month. However, that can be misleading.
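As a rough sketch of that simple approach, assuming a business that is closed on Sundays (as in the banking example above):

```python
import calendar

def trading_days(year, month, closed_weekdays=(6,)):  # 6 = Sunday in calendar.weekday
    """Count the days in the month that do not fall on a closed weekday."""
    days = calendar.monthrange(year, month)[1]
    return sum(1 for d in range(1, days + 1)
               if calendar.weekday(year, month, d) not in closed_weekdays)

def trading_day_adjust(value, year, month):
    """Scale the month's value by (average trading days per month) / (trading days this month)."""
    average = sum(trading_days(year, m) for m in range(1, 13)) / 12.0
    return value * average / trading_days(year, month)

# August 2010 had five Sundays (only 26 trading days), so its raw total is scaled up slightly.
print(round(trading_day_adjust(100_000, 2010, 8)))
```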

Many analysts also rely on other approaches to adjust for trading days in regression analysis: seasonal dummy variables (which we discussed earlier this year); creating independent variables that denote the number of times each day of the week occurred in that month; and a dummy variable for Easter (having a value of 1 in either March or April, depending on when it fell, and 0 in the non-Easter month).
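Here is a sketch of how those regression inputs might be built; the Easter lookup table below is a stand-in for whatever date source you actually use.

```python
import calendar

def weekday_counts(year, month):
    """Counts of Monday through Sunday occurrences in the month (seven trading-day regressors)."""
    counts = [0] * 7
    for day in range(1, calendar.monthrange(year, month)[1] + 1):
        counts[calendar.weekday(year, month, day)] += 1
    return counts

# Easter dummy: 1 for the month (March or April) in which Easter falls that year, 0 otherwise.
easter_month = {2010: 4, 2011: 4}  # small lookup table; extend it for the years you model

def easter_dummy(year, month):
    return 1 if easter_month.get(year) == month else 0

print(weekday_counts(2010, 12))                      # December 2010: five Wed/Thu/Fri, four of the rest
print(easter_dummy(2011, 4), easter_dummy(2011, 3))  # 1 0
```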

Adjusting for calendar and trading day effects is crucial to effective forecasting and discernment of seasonal patterns.

Forecast Friday Resumes January 6, 2011

Forecast Friday will not be published on December 23 and December 30, in observance of Christmas and New Year’s, but will resume on January 6, 2011. When we resume, we will begin a six-week miniseries on autoregressive integrated moving average (ARIMA) models in forecasting. That series will round out our discussions of quantitative forecasting techniques, after which we will spend five weeks on judgmental forecasts, followed by a four-week capstone tying together everything we’ve discussed. There’s much to look forward to in the New Year.

*************************

Be Sure to Follow Us on Facebook and Twitter!

Thanks to all of you, Analysights now has over 200 fans on Facebook … and we’d love more! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you and they will be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!