Posts Tagged ‘forecasting methods’

Forecast Friday Topic: Judgmental Extrapolation

February 3, 2011

(Thirty-sixth in a series)

The forecasting methods we have discussed since the start of the Forecast Friday series have been quantitative. Formal quantitative models are often quite useful for predicting the near future, as the recent past often indicates expected results for the future. However, things change over time. While predictive models might be useful in forecasting the number of visits to your Web site next month, they may be less relevant to predicting your company’s social media patterns five or 10 years from now. Technology is likely to change dramatically during that time. Hence, more qualitative, or judgmental, forecasts are often required. Thus begins the next section of our series: Judgmental Methods in Forecasting.

Yet even with short-run forecasting, human judgment should be a part of the forecasts. A time series model can’t explain why a pattern is happening; it can only make predictions based on the patterns in the series it has “learned.” It cannot take into account the current environment in which those numbers came about, or information some experts in the field have about events likely to occur. Hence, forecasts by models should never be the “be-all, end-all.”

Essentially, there are two types of judgmental forecasting: subject matter expertise, which we will discuss in next week’s post, and judgmental extrapolation, which is today’s topic. Judgmental extrapolation – also known as bold freehand extrapolation – is the crudest form of judgmental forecasting, and there’s really no expertise required to do it. Judgmental extrapolation is simply looking at the graph of a time series and making projections based upon visual inspection. That’s all there is to it; no understanding of the physical process behind the time series is required.

The advantage of judgmental extrapolation (the only one I could find, anyway) is its efficiency: it doesn’t require a lot of time, effort, understanding of the series, or money. But that’s efficiency, not accuracy! When time and money are short, judgmental extrapolation is sometimes the only way to go. But if you have a time series already, you might get better results just plugging it into Excel and using its exponential smoothing or regression tools – and even that is relatively time and cost efficient.

Unfortunately, there are no definitive findings in the published literature on the accuracy of judgmental extrapolation. I tend to be among its skeptics. Perhaps the strongest finding I’ve seen for the accuracy of judgmental forecasts (and it’s not really an argument in favor!) is that, when shown graphs of forecasts, individuals can adjust them in ways that improve the forecasts – but only if the forecasts themselves are far from optimal! That was the finding of T. R. Willemain, in a 1991 article in the International Journal of Forecasting.

So why do I mention judgmental extrapolation? As I said before, sometimes you need to make decisions quickly and without resources or adequate information. What’s more, judgmental extrapolation’s value – though not proven – has also not been disproven. Until its value is disproven, judgmental extrapolation should be considered another tool in the forecasting arsenal.

Next Forecast Friday Topic: Expert Judgment

Today we talked about forecasts relying upon non-expert judgment. Next week, we’ll talk about judgmental forecasts that are based on the opinion of subject matter experts.

********************************************************

Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!


Forecast Friday Topic: Simple Regression Analysis

May 27, 2010

(Sixth in a series)

Today, we begin our discussion of regression analysis as a time series forecasting tool. This discussion will take the next few weeks, as there is much behind it. As always, I will make sure everything is simplified and easy for you to digest. Regression is a powerful tool that can be very helpful for mid- and long-range forecasting. Quite often, the business decisions we make require us to consider relationships between two or more variables. Rarely can we make changes to our promotion, pricing, and/or product development strategies without them having an impact of some kind on our sales. Just how big an impact would that be? How do we measure the relationship between two or more variables? And does a real relationship even exist between those variables? Regression analysis helps us find out.

One thing I must point out: remember the “deviations” we discussed in the posts on moving average and exponential smoothing techniques – the differences between the forecasted and actual values for each observation, of which we took the absolute value? In regression analysis, we refer to those deviations as the “error terms” or “residuals.” The residuals – which we will square, rather than take the absolute value of – become very important in gauging the regression model’s accuracy, validity, efficiency, and “goodness of fit.”

Simple Linear Regression Analysis

Sue Stone, owner of Stone & Associates, looked at her CPA practice’s monthly receipts from January to December 2009. The sales were as follows:

Month        Sales
January      $10,000
February     $11,000
March        $10,500
April        $11,500
May          $12,500
June         $12,000
July         $14,000
August       $13,000
September    $13,500
October      $15,000
November     $14,500
December     $15,500

Sue is trying to predict what sales will be for each month in the first quarter of 2010, but is unsure of how to go about it. Moving average and exponential smoothing techniques rarely go more than one period ahead. So, what is Sue to do?

When we are presented with a set of numbers, one of the ways we try to make sense of it is by taking its average. Perhaps Sue can average all 12 months’ sales – $12,750 – and use that as her forecast for each of the next three months. But how accurately would that measure each month of 2009? How spread out are each month’s sales from the average? Sue subtracts the average from each month’s sales and examines the differences:

Month        Sales      Sales Less Average Sales
January      $10,000    -$2,750
February     $11,000    -$1,750
March        $10,500    -$2,250
April        $11,500    -$1,250
May          $12,500    -$250
June         $12,000    -$750
July         $14,000    $1,250
August       $13,000    $250
September    $13,500    $750
October      $15,000    $2,250
November     $14,500    $1,750
December     $15,500    $2,750

Sue notices that the error between actual and average is quite high in both the first four months of 2009 and in the last three months of 2009. She wants to understand the overall error in using the average as a forecast of sales. However, when she sums up all the errors from month to month, Sue finds they sum to zero. That tells her nothing. So she squares each month’s error value and sums them:

Month        Sales      Error      Error Squared
January      $10,000    -$2,750    $7,562,500
February     $11,000    -$1,750    $3,062,500
March        $10,500    -$2,250    $5,062,500
April        $11,500    -$1,250    $1,562,500
May          $12,500    -$250      $62,500
June         $12,000    -$750      $562,500
July         $14,000    $1,250     $1,562,500
August       $13,000    $250       $62,500
September    $13,500    $750       $562,500
October      $15,000    $2,250     $5,062,500
November     $14,500    $1,750     $3,062,500
December     $15,500    $2,750     $7,562,500
Total Error:                       $35,750,000

In totaling these squared errors, Sue derives the total sum of squares (TSS): 35,750,000. Is there any way she can improve upon that? Sue thinks for a while. She doesn’t know much more about her 2009 sales except for the month in which they were generated. She plots the sales on a chart.

Sue notices that sales by month appear to be on an upward trend. Sue thinks for a moment. “All I know is the sales and the month,” she says to herself, “How can I develop a model to forecast accurately?” Sue reads about a statistical procedure called regression analysis and, seeing that each month’s sales is in sequential order, she wonders whether the mere passage of time simply causes sales to go higher. Sue numbers each month, with January assigned a 1 and December, a 12.

She also realizes that she is trying to predict sales with each passing month. Hence, she hypothesizes that the change in sales depends on the change in the month. Sales is therefore Sue’s dependent variable. Because the month number is used to estimate the change in sales, it is her independent variable. In regression analysis, the relationship between an independent and a dependent variable is expressed as:

Y = α + βX + ε

    Where: Y is the value of the dependent variable

    X is the value of the independent variable

    α is a population parameter, called the intercept, which would be the value of Y when X=0

    β is also a population parameter – the slope of the regression line – representing the change in Y associated with each one-unit change in X.

    ε is the error term.

Sue further reads that the goal of regression analysis is to minimize the error sum of squares, which is why it is referred to as ordinary least squares (OLS) regression. She also notices that she is building her regression on a sample, so there is a sample regression equation used to estimate the true regression for the population:

Ŷi = a + bXi + ei

Essentially, the equation is the same as the one above, but the terms refer to the sample. The Ŷ term (called “Y hat”) is the sample forecasted value of the dependent variable (sales) at period i; a is the sample estimate of α; b is the sample estimate of β; Xi is the value of the independent variable at period i; and ei is the error, or difference between Ŷ (the forecasted value) and actual Y for period i. Sue needs to find the values for a and b – the estimates of the population parameters – that minimize the error sum of squares.

Sue reads that the equations for estimating a and b are derived from calculus, but expressed algebraically as:

b = Σ(X – X̄)(Y – Ȳ) / Σ(X – X̄)²

a = Ȳ – bX̄

Sue learns that the X and Y terms with lines above them, known as “X bar” and “Y bar,” respectively, are the averages of all the X and Y values. She also reads that the Σ notation – the Greek letter sigma – represents a sum. Hence, Sue realizes a few things:

  1. She must estimate b before she can estimate a.
  2. To estimate b, she must first compute the numerator:
    a. subtract the average month number from each observation’s month number (X minus X-bar),
    b. subtract average sales from each observation’s sales (Y minus Y-bar),
    c. multiply those two differences together, and
    d. add up the products from (c) for all observations.
  3. To get the denominator for calculating b, she must:
    a. again subtract X-bar from X, but then square the difference, for each observation, and
    b. sum the squares up.
  4. Calculating b is then easy: she needs only to divide the result from (2) by the result from (3).
  5. Calculating a is also easy: she multiplies her b value by the average month number (X-bar), and subtracts the product from average sales (Y-bar). (A short code sketch after this list illustrates these steps.)
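To make these steps concrete, here is a minimal Python sketch of the same arithmetic. This is my own illustration – Sue works in Excel below, and all variable names here are mine:

    # Sue's 2009 monthly sales; X is the month number, 1 (January) through 12 (December)
    months = list(range(1, 13))
    sales = [10000, 11000, 10500, 11500, 12500, 12000,
             14000, 13000, 13500, 15000, 14500, 15500]

    x_bar = sum(months) / len(months)  # average month number: 6.5
    y_bar = sum(sales) / len(sales)    # average sales: 12,750

    # Step 2: the numerator, the sum of (X - X-bar)(Y - Y-bar)
    numerator = sum((x - x_bar) * (y - y_bar) for x, y in zip(months, sales))
    # Step 3: the denominator, the sum of (X - X-bar) squared
    denominator = sum((x - x_bar) ** 2 for x in months)

    b = numerator / denominator  # 68,500 / 143 = 479.02
    a = y_bar - b * x_bar        # 12,750 - 3,113.64 = 9,636.36
    print(f"b = {b:.2f}, a = {a:.2f}")

Running this reproduces the numbers Sue works out by hand below.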

Sue now goes to work to compute her regression equation. She goes into Excel and enters her monthly sales data in a table, and computes the averages for sales and month number:

 

             Month (X)    Sales (Y)
             1            $10,000
             2            $11,000
             3            $10,500
             4            $11,500
             5            $12,500
             6            $12,000
             7            $14,000
             8            $13,000
             9            $13,500
             10           $15,000
             11           $14,500
             12           $15,500
Average      6.5          $12,750

Sue goes ahead and subtracts the respective averages from the X and Y values, and computes the components she needs (the “Product” is the result of multiplying the values in the first two columns together):

X minus X-bar   Y minus Y-bar   Product    (X minus X-bar) Squared
-5.5            -$2,750         $15,125    30.25
-4.5            -$1,750         $7,875     20.25
-3.5            -$2,250         $7,875     12.25
-2.5            -$1,250         $3,125     6.25
-1.5            -$250           $375       2.25
-0.5            -$750           $375       0.25
0.5             $1,250          $625       0.25
1.5             $250            $375       2.25
2.5             $750            $1,875     6.25
3.5             $2,250          $7,875     12.25
4.5             $1,750          $7,875     20.25
5.5             $2,750          $15,125    30.25
Total                           $68,500    143

Sue computes b:

b = $68,500/143

= $479.02

Now that Sue knows b, she calculates a:

a = $12,750 – $479.02(6.5)

= $12,750 – $3,113.64

= $9,636.36

Hence, setting the error term aside, Sue’s least-squares regression equation is:

Ŷ = $9,636.36 + $479.02X

Or, in business terminology:

Forecasted Sales = $9,636.36 + $479.02 * Month number.

This means that each passing month is associated with an average increase in sales of $479.02 for Sue’s CPA firm. How accurately does this regression model predict sales? Sue estimates the error by plugging each month’s number into the equation and then comparing her forecast for that month with the actual sales:

Month (X)   Sales (Y)   Forecasted Sales   Error
1           $10,000     $10,115.38         -$115.38
2           $11,000     $10,594.41         $405.59
3           $10,500     $11,073.43         -$573.43
4           $11,500     $11,552.45         -$52.45
5           $12,500     $12,031.47         $468.53
6           $12,000     $12,510.49         -$510.49
7           $14,000     $12,989.51         $1,010.49
8           $13,000     $13,468.53         -$468.53
9           $13,500     $13,947.55         -$447.55
10          $15,000     $14,426.57         $573.43
11          $14,500     $14,905.59         -$405.59
12          $15,500     $15,384.62         $115.38

Sue’s actual and forecasted sales appear to be pretty close, except for her July estimate, which is off by a little over $1,000. But does her model predict better than if she simply used average sales as her forecast for each month? To find out, she must compute the error sum of squares (ESS). Sue squares the error term for each observation and sums the squares to obtain ESS:

ESS = Σe²

Error       Squared Error
-$115.38    $13,313.61
$405.59     $164,506.82
-$573.43    $328,818.04
-$52.45     $2,750.75
$468.53     $219,521.74
-$510.49    $260,599.54
$1,010.49   $1,021,089.05
-$468.53    $219,521.74
-$447.55    $200,303.19
$573.43     $328,818.04
-$405.59    $164,506.82
$115.38     $13,313.61
ESS =       $2,937,062.94

Notice Sue’s error sum of squares. This is the error, or unexplained, sum of squared deviations between the forecasted and actual sales. The difference between the total sum of squares (TSS) and the error sum of squares (ESS) is the regression sum of squares (RSS) – the sum of squared deviations that are explained by the regression. RSS can also be calculated directly: subtract average sales from each forecasted value of sales, square the difference, and sum across all observations:

Forecasted Sales   Average Sales   Regression Error   Reg. Error Squared
$10,115.38         $12,750         -$2,634.62         $6,941,198.22
$10,594.41         $12,750         -$2,155.59         $4,646,587.24
$11,073.43         $12,750         -$1,676.57         $2,810,898.45
$11,552.45         $12,750         -$1,197.55         $1,434,131.86
$12,031.47         $12,750         -$718.53           $516,287.47
$12,510.49         $12,750         -$239.51           $57,365.27
$12,989.51         $12,750         $239.51            $57,365.27
$13,468.53         $12,750         $718.53            $516,287.47
$13,947.55         $12,750         $1,197.55          $1,434,131.86
$14,426.57         $12,750         $1,676.57          $2,810,898.45
$14,905.59         $12,750         $2,155.59          $4,646,587.24
$15,384.62         $12,750         $2,634.62          $6,941,198.22
RSS =                                                 $32,812,937.06

Sue immediately adds the RSS and the ESS and sees they match the TSS: $35,750,000. She also knows that nearly 33 million of that TSS is explained by her regression model, so she divides her RSS by the TSS:

32,812,937.06 / 35,750,000 = 0.917, or 91.7%

This quotient, known as the coefficient of determination and denoted R², tells Sue that the month number explains 91.7% of the variation in her monthly sales. Put another way, Sue reduced her squared forecast error by 91.7% by using this simple model instead of the simple average. As you will find out in subsequent blog posts, maximizing R² isn’t the “be-all and end-all.” In fact, there is still much to do with this model, which will be discussed in next week’s Forecast Friday post. But for now, Sue’s model seems to have eliminated a great deal of error.
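For readers who would rather verify the sums of squares in code than in a spreadsheet, here is a minimal Python sketch of the TSS/ESS/RSS decomposition, a continuation of the earlier sketch (names are mine):

    months = list(range(1, 13))
    sales = [10000, 11000, 10500, 11500, 12500, 12000,
             14000, 13000, 13500, 15000, 14500, 15500]
    a, b = 9636.36, 479.02  # intercept and slope estimated above (rounded)

    y_bar = sum(sales) / len(sales)
    forecasts = [a + b * x for x in months]

    tss = sum((y - y_bar) ** 2 for y in sales)                 # total sum of squares
    ess = sum((y - f) ** 2 for y, f in zip(sales, forecasts))  # unexplained (error) sum of squares
    rss = sum((f - y_bar) ** 2 for f in forecasts)             # explained (regression) sum of squares

    print(f"TSS = {tss:,.0f}")             # 35,750,000
    print(f"R-squared = {rss / tss:.3f}")  # about 0.917 (tiny differences come from rounding a and b)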

It is important to note that while each month does seem to be related to sales, the passing months do not cause the increase in sales. Correlation does not mean causation. There could be something behind the scenes (e.g., Sue’s advertising, or the types of projects she works on, etc.) that is driving the upward trend in her sales.

Using the Regression Equation to Forecast Sales

Now Sue can use the same model to forecast sales for January 2010 and February 2010, etc. She has her equation, so since January 2010 is period 13, she plugs in 13 for X, and gets a forecast of $15,863.64; for February (period 14), she gets $16,342.66.
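In code, that extrapolation is one line per month – a sketch continuing the example above, where the small differences from Sue’s figures come from rounding b:

    def forecast_sales(month_number, a=9636.36, b=479.02):
        """Forecast sales for a given month number from the fitted line."""
        return a + b * month_number

    print(forecast_sales(13))  # January 2010: about $15,863.62
    print(forecast_sales(14))  # February 2010: about $16,342.64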

Recap and Plan for Next Week

You have now learned the basics of simple regression analysis. You have learned how to estimate the parameters for the regression equation, how to measure the improvement in accuracy from the regression model, and how to generate forecasts. Next week, we will be checking the validity of Sue’s equation, and discussing the important assumptions underlying regression analysis. Until then, you have a basic overview of what regression analysis is.

Forecast Friday Topic: Exponential Smoothing Methods

May 13, 2010

(Fourth in a series)

In last week’s Forecast Friday post, we discussed moving average forecasting methods, both simple and weighted. When a time series is stationary – that is, it exhibits no discernible trend or seasonality and is subject only to the randomness of everyday existence – then moving average methods, or even a simple average of the entire series, are useful for forecasting the next few periods. However, most time series are anything but stationary: retail sales have trend, seasonal, and cyclical elements, while public utilities have trend and seasonal components that impact the usage of electricity and heat. Hence, moving average forecasting approaches may provide less than desirable results. Moreover, the most recent sales figures typically are more indicative of future sales, so there is often a need for a forecasting system that places greater weight on more recent observations. Enter exponential smoothing.

Unlike moving average models, which use a fixed number of the most recent values in the time series for smoothing and forecasting, exponential smoothing incorporates all values in the time series, placing the heaviest weight on the most recent data and weights on older observations that diminish exponentially over time. Because of the emphasis on all previous periods in the data set, the exponential smoothing model is recursive. When a time series exhibits no strong or discernible seasonality or trend, the simplest form of exponential smoothing – single exponential smoothing – can be applied. The formula for single exponential smoothing is:

Ŷt+1 = αYt + (1-α) Ŷt

In this equation, Ŷt+1 represents the forecast value for period t + 1; Yt is the actual value of the current period, t; Ŷt is the forecast value for the current period, t; and α is the smoothing constant, or alpha, a number between 0 and 1. Alpha is the weight you assign to the most recent observation in your time series. Essentially, you are basing your forecast for the next period on the actual value for this period, and the value you forecasted for this period, which in turn was based on forecasts for periods before that.

Let’s assume you’ve been in business for 10 weeks and want to forecast sales for the 11th week. Sales for those first 10 weeks are:

Week (t)   Sales (Yt)
1          200
2          215
3          210
4          220
5          230
6          220
7          235
8          215
9          220
10         210

From the equation above, you know that in order to come up with a forecast for week 11, you need forecasted values for weeks 10, 9, and all the way down to week 1. You also know that week 1 does not have any preceding period, so it cannot be forecasted. And you need to determine the smoothing constant, or alpha, to use for your forecasts.

Determining the Initial Forecast

The first step in constructing your exponential smoothing model is to generate a forecast value for the first period in your time series. The most common practice is to set the forecasted value of week 1 equal to the actual value, 200, which we will do in our example. Another approach, if you have sales data from before this period that you are not using in the construction of the model, is to take an average of a couple of immediately prior periods and use that as the forecast. How you determine your initial forecast is subjective.

How Big Should Alpha Be?

This too is a judgment call, and finding the appropriate alpha is subject to trial and error. Generally, if your time series is very stable, a small α is appropriate. Visual inspection of your sales on a graph is also useful in trying to pinpoint an alpha to start with. Why is the size of α important? Because the closer α is to 1, the more weight is assigned to the most recent value in determining your forecast, the more rapidly your forecast adjusts to patterns in your time series, and the less smoothing that occurs. Likewise, the closer α is to 0, the more weight is placed on earlier observations in determining the forecast, the more slowly your forecast adjusts to patterns in the time series, and the more smoothing that occurs. Let’s visually inspect the 10 weeks of sales.

The Exponential Smoothing Process

The sales appear somewhat jagged, oscillating between 200 and 235. Let’s start with an alpha of 0.5. That gives us the following table:

Week (t)   Sales (Yt)   Forecast for This Period (Ŷt)
1          200          200.0
2          215          200.0
3          210          207.5
4          220          208.8
5          230          214.4
6          220          222.2
7          235          221.1
8          215          228.0
9          220          221.5
10         210          220.8

Notice how, even though your forecasts aren’t precise, when your actual value for a particular week is higher than what you forecasted (weeks 2 through 5, for example), your forecasts for each of the subsequent weeks (weeks 3 through 6) adjust upward; when your actual values are lower than your forecast (e.g., weeks 6, 8, 9, and 10), your forecasts for the following weeks adjust downward. Also notice that, as you move to later periods, your earlier forecasts play less and less of a role in your later forecasts, as their weight diminishes exponentially. Just by looking at the table above, you know that the forecast for week 11 will be lower than 220.8, your forecast for week 10:

Ŷ11 = 0.5Y10 + (1-0.5) Ŷ10

= 0.5(210) + 0.5(220.8)

= 105 + 110.4

=215.4

So, based on our alpha and our past sales, our best guess is that sales in week 11 will be 215.4. Take a look at the graph of actual vs. forecasted sales for weeks 1-10:

Notice that the forecasted sales are smoother than actual, and you can see how the forecasted sales line adjusts to spikes and dips in the actual sales time series.
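If you would rather script the process than build the table by hand, here is a minimal Python sketch of the single exponential smoothing recursion (the function and variable names are mine):

    def single_exponential_smoothing(series, alpha):
        """Return forecasts F1..F(n+1), where F(t) is the forecast for period t.

        F1 is seeded with the first actual value; thereafter
        F(t+1) = alpha * Y(t) + (1 - alpha) * F(t).
        """
        forecasts = [series[0]]  # initial forecast: set equal to the first actual value
        for actual in series:    # each pass produces the forecast for the next period
            forecasts.append(alpha * actual + (1 - alpha) * forecasts[-1])
        return forecasts

    sales = [200, 215, 210, 220, 230, 220, 235, 215, 220, 210]
    forecasts = single_exponential_smoothing(sales, 0.5)
    print([round(f, 1) for f in forecasts])
    # [200, 200.0, 207.5, 208.8, 214.4, 222.2, 221.1, 228.0, 221.5, 220.8, 215.4]
    # The first ten values match the table above; the last is the week 11 forecast.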

What if We Had Used a Smaller or Larger Alpha?

We’ll demonstrate by using both an alpha of .30 and one of .70. That gives us the following table and graph:

Week (t)   Sales (Yt)   Forecast α=0.50   Forecast α=0.30   Forecast α=0.70
1          200          200.0             200.0             200.0
2          215          200.0             200.0             200.0
3          210          207.5             204.5             210.5
4          220          208.8             206.2             210.2
5          230          214.4             210.3             217.0
6          220          222.2             216.2             226.1
7          235          221.1             217.3             221.8
8          215          228.0             222.6             231.1
9          220          221.5             220.4             219.8
10         210          220.8             220.2             219.9

As you can see, the smaller the α, the smoother the curve for forecasted sales; the larger the α, the bumpier the curve, as you can see as you move from .30 to .50 to .70. Notice how much faster an α of .70 adjusts to the actual sales than the smaller α’s. The forecasts for week 11 would be 217.2 with an α=.30 and 213 with an α=.70.

Which α is best?

As with moving average models, the Mean Absolute Deviation (MAD) can be used to determine which alpha best fits the data. The MADs for each alpha are computed below:

       Absolute Deviations
Week   α=.30   α=.50   α=.70
1      –       –       –
2      15.0    15.0    15.0
3      5.5     2.5     0.5
4      13.9    11.3    9.8
5      19.7    15.6    13.0
6      3.8     2.2     6.1
7      17.7    13.9    13.2
8      7.6     13.0    16.1
9      0.4     1.5     0.2
10     10.2    10.8    9.9
MAD =  9.4     8.6     8.4

Using an alpha of 0.70, we end up with the lowest MAD of the three constants. Keep in mind that judging the dependability of forecasts isn’t always about minimizing MAD. MAD, after all, is an average of deviations. Notice how dramatically the absolute deviations for each of the alphas change from week to week. Forecasts might be more reliable using an alpha that produces a higher MAD, but has less variance among its individual deviations.
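To make the comparison concrete, here is a short Python sketch that computes the MADs, reusing the single_exponential_smoothing function from the earlier sketch. It follows this post’s convention of averaging over all ten weeks, counting week 1’s deviation as zero:

    sales = [200, 215, 210, 220, 230, 220, 235, 215, 220, 210]

    def mean_absolute_deviation(series, alpha):
        """MAD of one-step-ahead smoothing forecasts against the actuals."""
        forecasts = single_exponential_smoothing(series, alpha)
        # Pair each week's actual with its forecast; week 1's deviation is zero
        deviations = [abs(y - f) for y, f in zip(series, forecasts)]
        return sum(deviations) / len(deviations)

    for alpha in (0.30, 0.50, 0.70):
        print(f"alpha = {alpha:.2f}: MAD = {mean_absolute_deviation(sales, alpha):.1f}")
    # alpha = 0.30: MAD = 9.4; alpha = 0.50: MAD = 8.6; alpha = 0.70: MAD = 8.4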

Limits on Exponential Smoothing

Exponential smoothing is not intended for long-term forecasting. Usually it is used to predict one or two, but rarely more than three periods ahead. Also, if there is a sudden drastic change in the level of sales or values, and the time series continues at that new level, then the algorithm will be slow to catch up with the sudden change. Hence, there will be greater forecasting error. In situations like that, it would be best to ignore the previous periods before the change, and begin the exponential smoothing process with the new level. Finally, this post discussed single exponential smoothing, which is used when there is no noticeable seasonality or trend in the data. When there is a noticeable trend or seasonal pattern in the data, single exponential smoothing will yield significant forecast error. Double exponential smoothing is needed here to adjust for those patterns. We will cover double exponential smoothing in next week’s Forecast Friday post.

Still don’t know why our Forecast Friday posts appear on Thursday? Find out at: http://tinyurl.com/26cm6ma

 

 

 

A Series on Forecasting Sales for Your Business’ Success

April 22, 2010

Trying to predict what sales will be like next year, next month, next week, or even tomorrow is as much an art as it is a science. Because the forecasting process is often difficult and tedious, and because there’s no guarantee that a forecast will be precise, some companies and businesses don’t even bother to do it, resigning themselves to what Hamlet would call “the slings and arrows of outrageous fortune” that the business world can certainly dish out.

Yet forecasting, if done properly, can be a great tool to reduce uncertainty about the future, as well as risk: it can make it easier to manage staffing and inventory levels; decide when to tap credit lines, step up marketing and sales efforts, or  acquire new equipment; and/or determine whether a new product will be worth launching.  In short, forecasting is intended to facilitate better planning.

These next few posts will introduce you to the different kinds of forecasting methods available, how to determine which method is most appropriate for your project, what information you need to generate the forecast, how to generate it, and how to test it for effectiveness.

This post serves as a roadmap for what is to come on Insight Central.  For the next few posts, you can expect to see (in this order):

  1. A high-level discussion of the different categories of forecasting methods: time series, causal/econometric, judgmental, artificial intelligence, and other.
  2. A deeper discussion of the more popular forecasting methods: regression analysis, exponential smoothing, moving average, ARIMA (Box-Jenkins), as well as some of the judgmental methods like surveys, composite forecasts, scenario building, and simulation.  Each discussion will also include a section on how to know if it’s one you should use.
  3. A discussion about the data you use to forecast; and how to prepare it for forecasting.
  4. How to determine if your forecast model is valid, reliable, and good for predicting.
  5. Some (nontechnical) case studies in which Analysights applied forecasting methods, and the results.

The next several posts will give you the pointers you need in order to forecast your business’ sales more effectively, and I believe you’ll find them to be informative, interesting, and exciting, not to mention beneficial to your top and bottom lines.  Let the voyage begin!