*(Last in the series)*

We have finally come to the end of our almost year-long *Forecast Friday* journey. Over that time, we have covered a range of forecasting methods, including regression analysis, exponential smoothing, and moving-average methods, as well as the basics of both ARIMA and logistic regression models. We also discussed qualitative, or judgmental, forecasting methods; how to diagnose your regression models for violations such as multicollinearity, autocorrelation, heteroscedasticity, and specification bias; and a series of other topics in forecasting, such as the identification problem, leading economic indicators, calendar effects, and the combination of forecasts. Now we move on to the last part of the forecasting process: evaluating forecasts.

How well does your forecast model perform? That question should be the crux of your evaluation, because it relates directly to your company’s bottom line. You need to consider the costs to your company of forecasting too high and of forecasting too low. If you own a toy store and your sales forecast for some stock-keeping units (SKUs) is too high, you risk having to mark those items down on clearance. On the other hand, if your forecast is too low, you risk running out of stock. Which type of mistake is more costly to your company? How much error in each direction can you affordably tolerate? These are questions you must consider.
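The asymmetry between the two kinds of misses can be made concrete with a small sketch. Everything here is hypothetical: the function name, the $4 markdown cost, and the $10 stockout cost are made-up assumptions, not figures from the toy-store example.

```python
# Hypothetical illustration: comparing the cost of over- vs. under-forecasting
# a single SKU. The unit costs below are made-up assumptions.

def forecast_error_cost(forecast, actual, markdown_cost, stockout_cost):
    """Cost of a forecast miss for one SKU.

    markdown_cost: loss per unit of excess stock (forecast too high)
    stockout_cost: lost margin per unit of unmet demand (forecast too low)
    """
    if forecast > actual:
        return (forecast - actual) * markdown_cost   # overstock -> clearance markdowns
    return (actual - forecast) * stockout_cost       # understock -> lost sales

# Assume markdowns cost $4/unit and stockouts cost $10/unit of lost margin:
print(forecast_error_cost(120, 100, 4, 10))  # over-forecast by 20 units -> 80
print(forecast_error_cost(80, 100, 4, 10))   # under-forecast by 20 units -> 200
```

With these assumed costs, missing low is two and a half times as expensive as missing high by the same amount, so you would deliberately tolerate more error on the high side.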

Your models are useless if you don’t track how well they perform. Any time you generate a forecast, your model will not only give you a point forecast, but also a prediction interval associated with a given level of confidence. The point forecast is the midpoint of that prediction interval. Each time you generate a forecast, record the actual results. Did actuals fall within the prediction interval? If so, how close to the point forecast did they fall? If not, how far off were you?

As you track forecasts vs. actuals over time, determine how often your actuals fall within or outside your prediction intervals, and how close to the point forecast they are. If your actuals frequently land far from your point estimate, especially near the upper or lower bounds of your prediction interval, that’s likely a sign that your model needs to be reworked. Indeed, model performance degrades over time. Technological advances, societal changes, shifts in tastes, styles, and preferences, and random events can all drive forecast error, because forecasting models are built on past data and assume that the future will continue to resemble the past.
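The tracking routine described above can be sketched in a few lines. The records below are invented for illustration; in practice they would come from your own forecast log, one entry per forecast period.

```python
# A minimal sketch of tracking forecasts vs. actuals over time.
# The records below are hypothetical; substitute your own forecast log.

records = [
    # (point forecast, lower bound, upper bound, actual)
    (100, 90, 110, 104),
    (120, 108, 132, 131),
    (95, 85, 105, 83),    # actual fell outside the interval
    (110, 99, 121, 112),
]

# How often did actuals land inside the prediction interval?
inside = sum(1 for _, lo, hi, actual in records if lo <= actual <= hi)
coverage = inside / len(records)

# Mean absolute percentage error (MAPE) against the point forecasts
mape = sum(abs(actual - point) / actual
           for point, _, _, actual in records) / len(records)

print(f"interval coverage: {coverage:.0%}")  # 75% for these sample records
print(f"MAPE: {mape:.1%}")
```

If coverage runs well below the confidence level you built the intervals at, or if the error metric creeps upward period after period, that is the signal to rework the model.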

Forecasting is as much an art as it is a science. And I hasten to add that the ability to forecast is like a muscle – you need to exercise it in order to strengthen it. Forecasts are never consistently perfect, but they can be frequently excellent. Don’t look to become a forecasting “guru.” It doesn’t last. Allow yourself to learn new things from every forecasting process you go through and each forecast evaluation you perform. And if you do that, becoming a great forecaster is in your forecast! And I can’t think of a better note on which to end the *Forecast Friday* series.

**********

**Tell us what you thought of the Forecast Friday series!**

We’ve been on a long road with *Forecast Friday*. I began the series last year because I believed that forecasting is an art that every business, and every marketing, finance, or production professional, could use to get ahead. Many of you have been tuning in to *Forecast Friday* each Thursday, so I would appreciate your honest feedback. Please leave comments and let me know which topic(s) you found most helpful or useful. What could I have done better? What topic(s) should I have covered? Please don’t hold back. The purpose of *Insight Central* and *Forecast Friday* is to help you use analytics to advance your business and/or career.