
Forecast Friday Topic: Detecting Heteroscedasticity – Analytical Approaches

August 19, 2010

(Eighteenth in a series)

Last week, we discussed the violation of the homoscedasticity assumption of regression analysis: the assumption that the error terms have a constant variance. When the error terms do not exhibit a constant variance, they are said to be heteroscedastic. A model that exhibits heteroscedasticity produces parameter estimates that are not biased, but rather inefficient. Heteroscedasticity most often appears in cross-sectional data and is frequently caused by a wide range of possible values for one or more independent variables.

Last week, we showed you how to detect heteroscedasticity by visually inspecting the plot of the error terms against the independent variable. Today, we are going to discuss three simple, but very powerful, analytical approaches to detecting heteroscedasticity: the Goldfeld-Quandt test, the Breusch-Pagan test, and the Park test. These approaches are quite simple, but can be a bit tedious to employ.

Reviewing Our Model

Recall our model from last week. We were trying to determine the relationship between a census tract’s median family income (INCOME) and the ratio of the number of families who own their homes to the number of families who rent (OWNRATIO). Our hypothesis was that census tracts with higher median family incomes had a higher proportion of families who owned their homes. I took the example from my college econometrics textbook, which drew INCOME and OWNRATIO figures from 59 census tracts in Pierce County, Washington, compiled during the 1980 Census. We had the following data:

Housing Data

Tract   Income     Ownratio
601     $24,909    7.220
602     $11,875    1.094
603     $19,308    3.587
604     $20,375    5.279
605     $20,132    3.508
606     $15,351    0.789
607     $14,821    1.837
608     $18,816    5.150
609     $19,179    2.201
609     $21,434    1.932
610     $15,075    0.919
611     $15,634    1.898
612     $12,307    1.584
613     $10,063    0.901
614     $5,090     0.128
615     $8,110     0.059
616     $4,399     0.022
616     $5,411     0.172
617     $9,541     0.916
618     $13,095    1.265
619     $11,638    1.019
620     $12,711    1.698
621     $12,839    2.188
623     $15,202    2.850
624     $15,932    3.049
625     $14,178    2.307
626     $12,244    0.873
627     $10,391    0.410
628     $13,934    1.151
629     $14,201    1.274
630     $15,784    1.751
631     $18,917    5.074
632     $17,431    4.272
633     $17,044    3.868
634     $14,870    2.009
635     $19,384    2.256
701     $18,250    2.471
705     $14,212    3.019
706     $15,817    2.154
710     $21,911    5.190
711     $19,282    4.579
712     $21,795    3.717
713     $22,904    3.720
713     $22,507    6.127
714     $19,592    4.468
714     $16,900    2.110
718     $12,818    0.782
718     $9,849     0.259
719     $16,931    1.233
719     $23,545    3.288
720     $9,198     0.235
721     $22,190    1.406
721     $19,646    2.206
724     $24,750    5.650
726     $18,140    5.078
728     $21,250    1.433
731     $22,231    7.452
731     $19,788    5.738
735     $13,269    1.364

Data taken from U.S. Bureau of Census 1980 Pierce County, WA; Reprinted in Brown, W.S., Introducing Econometrics, St. Paul (1991): 198-200.

And we got the following regression equation:

Ŷ = 0.000297*Income – 2.221

With an R2=0.597, an F-ratio of 84.31, the t-ratios for INCOME (9.182) and the intercept (-4.094) both solidly significant, and the positive sign on the parameter estimate for INCOME, our model appeared to do very well. However, visual inspection of the regression residuals suggested the presence of heteroscedasticity. Unfortunately, visual inspection can only suggest; we need more objective ways of determining the presence of heteroscedasticity. Hence our three tests below.

The Goldfeld-Quandt Test

The Goldfeld-Quandt test is a computationally simple, and perhaps the most commonly used, method for detecting heteroscedasticity. Since a model with heteroscedastic error terms does not have a constant variance, the Goldfeld-Quandt test postulates that the variances associated with high values of the independent variable, X, are significantly different from those associated with low values. Essentially, you run separate regression analyses for the low values of X and for the high values, and then compare their error sums of squares with an F-ratio.

The Goldfeld-Quandt test has four steps:

Step #1: Sort the data

Take the independent variable you suspect to be the source of the heteroscedasticity and sort your data set by the X value in low-to-high order:

Housing Data

Tract   Income     Ownratio
616     $4,399     0.022
614     $5,090     0.128
616     $5,411     0.172
615     $8,110     0.059
720     $9,198     0.235
617     $9,541     0.916
718     $9,849     0.259
613     $10,063    0.901
627     $10,391    0.410
619     $11,638    1.019
602     $11,875    1.094
626     $12,244    0.873
612     $12,307    1.584
620     $12,711    1.698
718     $12,818    0.782
621     $12,839    2.188
618     $13,095    1.265
735     $13,269    1.364
628     $13,934    1.151
625     $14,178    2.307
629     $14,201    1.274
705     $14,212    3.019
607     $14,821    1.837
634     $14,870    2.009
610     $15,075    0.919
623     $15,202    2.850
606     $15,351    0.789
611     $15,634    1.898
630     $15,784    1.751
706     $15,817    2.154
624     $15,932    3.049
714     $16,900    2.110
719     $16,931    1.233
633     $17,044    3.868
632     $17,431    4.272
726     $18,140    5.078
701     $18,250    2.471
608     $18,816    5.150
631     $18,917    5.074
609     $19,179    2.201
711     $19,282    4.579
603     $19,308    3.587
635     $19,384    2.256
714     $19,592    4.468
721     $19,646    2.206
731     $19,788    5.738
605     $20,132    3.508
604     $20,375    5.279
728     $21,250    1.433
609     $21,434    1.932
712     $21,795    3.717
710     $21,911    5.190
721     $22,190    1.406
731     $22,231    7.452
713     $22,507    6.127
713     $22,904    3.720
719     $23,545    3.288
724     $24,750    5.650
601     $24,909    7.220

Step #2: Omit the middle observations

Next, take out the observations in the middle. This usually amounts to between one-fifth and one-third of your observations. There’s no hard-and-fast rule about how many observations to omit, and if your data set is small, you may not be able to omit any. In our example, we omit the 13 middle observations – tracts 634, 610, 623, 606, 611, 630, 706, 624, 714, 719, 633, 632, and 726 – leaving the 23 lowest-income and 23 highest-income tracts:

Housing Data

Tract   Income     Ownratio
616     $4,399     0.022
614     $5,090     0.128
616     $5,411     0.172
615     $8,110     0.059
720     $9,198     0.235
617     $9,541     0.916
718     $9,849     0.259
613     $10,063    0.901
627     $10,391    0.410
619     $11,638    1.019
602     $11,875    1.094
626     $12,244    0.873
612     $12,307    1.584
620     $12,711    1.698
718     $12,818    0.782
621     $12,839    2.188
618     $13,095    1.265
735     $13,269    1.364
628     $13,934    1.151
625     $14,178    2.307
629     $14,201    1.274
705     $14,212    3.019
607     $14,821    1.837
634     $14,870    2.009  *
610     $15,075    0.919  *
623     $15,202    2.850  *
606     $15,351    0.789  *
611     $15,634    1.898  *
630     $15,784    1.751  *
706     $15,817    2.154  *
624     $15,932    3.049  *
714     $16,900    2.110  *
719     $16,931    1.233  *
633     $17,044    3.868  *
632     $17,431    4.272  *
726     $18,140    5.078  *
701     $18,250    2.471
608     $18,816    5.150
631     $18,917    5.074
609     $19,179    2.201
711     $19,282    4.579
603     $19,308    3.587
635     $19,384    2.256
714     $19,592    4.468
721     $19,646    2.206
731     $19,788    5.738
605     $20,132    3.508
604     $20,375    5.279
728     $21,250    1.433
609     $21,434    1.932
712     $21,795    3.717
710     $21,911    5.190
721     $22,190    1.406
731     $22,231    7.452
713     $22,507    6.127
713     $22,904    3.720
719     $23,545    3.288
724     $24,750    5.650
601     $24,909    7.220

(* = omitted middle observation)

Step #3: Run two separate regressions, one for the low values, one for the high

We ran separate regressions for the 23 observations with the lowest values for INCOME and the 23 observations with the highest values. In these regressions, we weren’t concerned with whether the t-ratios of the parameter estimates were significant. Rather, we wanted to look at their Error Sum of Squares (ESS). Each model has 21 degrees of freedom.

Step #4: Divide the ESS of the higher-value regression by the ESS of the lower-value regression, and compare the quotient to the F-table

The higher-value regression produced an ESS of 61.489 and the lower-value regression produced an ESS of 5.189. Dividing the former by the latter, we get a quotient of 11.851. Now we go to the F-table and check the critical F-value at the 5% significance level for (21, 21) degrees of freedom, which is 2.10. Since our quotient of 11.851 is greater than the critical F-value, we can conclude there is strong evidence of heteroscedasticity in the model.
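The four steps can be sketched in Python with NumPy. Since the full tract data set isn’t reproduced in code here, the sketch below generates synthetic data shaped like the example (59 observations, error spread growing with income); the synthetic numbers are assumptions, so the resulting ratio will differ from 11.851, but the mechanics are the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the tract data: 59 observations whose error
# spread grows sharply with income (the source of heteroscedasticity).
n = 59
income = np.linspace(4_400, 24_900, n)
ownratio = -2.2 + 0.0003 * income + rng.normal(0.0, 0.25 * (income / 10_000) ** 2)

def ess(x, y):
    """Error sum of squares from a simple OLS fit of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return float(e @ e)

# Step 1: sort by the suspect independent variable.
order = np.argsort(income)
income, ownratio = income[order], ownratio[order]

# Step 2: omit the middle 13 observations, keeping 23 low and 23 high.
low, high = slice(0, 23), slice(36, 59)

# Steps 3 and 4: separate regressions, then the ratio of their ESS values,
# compared against the critical F(21, 21) value of about 2.10.
gq = ess(income[high], ownratio[high]) / ess(income[low], ownratio[low])
print(gq > 2.10)
```

With spread growing in income, the high-value regression’s ESS dwarfs the low-value regression’s, so the ratio comfortably exceeds the critical value.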

The Breusch-Pagan Test

The Breusch-Pagan test is also pretty simple, but it’s a very powerful test, in that it can be used to detect whether more than one independent variable is causing the heteroscedasticity. Since it can involve multiple variables, the Breusch-Pagan test relies on critical values of chi-squared (χ2) to determine the presence of heteroscedasticity, and works best with large sample sets. There are five steps to the Breusch-Pagan test:

Step #1: Run the regular regression model and collect the residuals

We already did that.

Step #2: Estimate the variance of the regression residuals

To do this, we square each residual, sum the squares, and then divide by the number of observations. Our formula is:

σ̂² = (Σ eᵢ²) / n

Our residuals and their squares are as follows:

Observation   Predicted Ownratio   Residuals   Residuals Squared
1             5.165                2.055       4.222
2             1.300                (0.206)     0.043
3             3.504                0.083       0.007
4             3.821                1.458       2.126
5             3.749                (0.241)     0.058
6             2.331                (1.542)     2.378
7             2.174                (0.337)     0.113
8             3.358                1.792       3.209
9             3.466                (1.265)     1.601
10            4.135                (2.203)     4.852
11            2.249                (1.330)     1.769
12            2.415                (0.517)     0.267
13            1.428                0.156       0.024
14            0.763                0.138       0.019
15            (0.712)              0.840       0.705
16            0.184                (0.125)     0.016
17            (0.917)              0.939       0.881
18            (0.617)              0.789       0.622
19            0.608                0.308       0.095
20            1.662                (0.397)     0.158
21            1.230                (0.211)     0.045
22            1.548                0.150       0.022
23            1.586                0.602       0.362
24            2.287                0.563       0.317
25            2.503                0.546       0.298
26            1.983                0.324       0.105
27            1.410                (0.537)     0.288
28            0.860                (0.450)     0.203
29            1.911                (0.760)     0.577
30            1.990                (0.716)     0.513
31            2.459                (0.708)     0.502
32            3.388                1.686       2.841
33            2.948                1.324       1.754
34            2.833                1.035       1.071
35            2.188                (0.179)     0.032
36            3.527                (1.271)     1.615
37            3.191                (0.720)     0.518
38            1.993                1.026       1.052
39            2.469                (0.315)     0.099
40            4.276                0.914       0.835
41            3.497                1.082       1.171
42            4.242                (0.525)     0.275
43            4.571                (0.851)     0.724
44            4.453                1.674       2.802
45            3.589                0.879       0.773
46            2.790                (0.680)     0.463
47            1.580                (0.798)     0.637
48            0.699                (0.440)     0.194
49            2.800                (1.567)     2.454
50            4.761                (1.473)     2.169
51            0.506                (0.271)     0.074
52            4.359                (2.953)     8.720
53            3.605                (1.399)     1.956
54            5.118                0.532       0.283
55            3.158                1.920       3.686
56            4.080                (2.647)     7.008
57            4.371                3.081       9.492
58            3.647                2.091       4.373
59            1.714                (0.350)     0.122

Summing the last column, we get 83.591. We divide this by 59, and get 1.417.

Step #3: Compute the square of the standardized residuals

Now that we know the variance of the regression residuals – 1.417 – we compute the standardized residuals by dividing each residual by 1.417 and then squaring the results, so that we get our square of standardized residuals, si2:

Obs.   Predicted Ownratio   Residuals   Standardized Residuals   Square of Standardized Residuals
1      5.165                2.055       1.450                    2.103
2      1.300                (0.206)     (0.146)                  0.021
3      3.504                0.083       0.058                    0.003
4      3.821                1.458       1.029                    1.059
5      3.749                (0.241)     (0.170)                  0.029
6      2.331                (1.542)     (1.088)                  1.185
7      2.174                (0.337)     (0.238)                  0.057
8      3.358                1.792       1.264                    1.599
9      3.466                (1.265)     (0.893)                  0.797
10     4.135                (2.203)     (1.555)                  2.417
11     2.249                (1.330)     (0.939)                  0.881
12     2.415                (0.517)     (0.365)                  0.133
13     1.428                0.156       0.110                    0.012
14     0.763                0.138       0.097                    0.009
15     (0.712)              0.840       0.593                    0.351
16     0.184                (0.125)     (0.088)                  0.008
17     (0.917)              0.939       0.662                    0.439
18     (0.617)              0.789       0.557                    0.310
19     0.608                0.308       0.217                    0.047
20     1.662                (0.397)     (0.280)                  0.079
21     1.230                (0.211)     (0.149)                  0.022
22     1.548                0.150       0.106                    0.011
23     1.586                0.602       0.425                    0.180
24     2.287                0.563       0.397                    0.158
25     2.503                0.546       0.385                    0.148
26     1.983                0.324       0.229                    0.052
27     1.410                (0.537)     (0.379)                  0.143
28     0.860                (0.450)     (0.318)                  0.101
29     1.911                (0.760)     (0.536)                  0.288
30     1.990                (0.716)     (0.505)                  0.255
31     2.459                (0.708)     (0.500)                  0.250
32     3.388                1.686       1.190                    1.415
33     2.948                1.324       0.935                    0.874
34     2.833                1.035       0.730                    0.534
35     2.188                (0.179)     (0.127)                  0.016
36     3.527                (1.271)     (0.897)                  0.805
37     3.191                (0.720)     (0.508)                  0.258
38     1.993                1.026       0.724                    0.524
39     2.469                (0.315)     (0.222)                  0.049
40     4.276                0.914       0.645                    0.416
41     3.497                1.082       0.764                    0.584
42     4.242                (0.525)     (0.370)                  0.137
43     4.571                (0.851)     (0.600)                  0.361
44     4.453                1.674       1.182                    1.396
45     3.589                0.879       0.621                    0.385
46     2.790                (0.680)     (0.480)                  0.231
47     1.580                (0.798)     (0.563)                  0.317
48     0.699                (0.440)     (0.311)                  0.097
49     2.800                (1.567)     (1.106)                  1.223
50     4.761                (1.473)     (1.040)                  1.081
51     0.506                (0.271)     (0.192)                  0.037
52     4.359                (2.953)     (2.084)                  4.344
53     3.605                (1.399)     (0.987)                  0.974
54     5.118                0.532       0.375                    0.141
55     3.158                1.920       1.355                    1.836
56     4.080                (2.647)     (1.868)                  3.491
57     4.371                3.081       2.175                    4.728
58     3.647                2.091       1.476                    2.179
59     1.714                (0.350)     (0.247)                  0.061

Step #4: Run another regression with all your independent variables, using the square of the standardized residuals as the dependent variable

In this case, we had only one independent variable, INCOME. We will now run a regression substituting the last column of the table above for OWNRATIO, and making it the dependent variable. Again, we’re not interested in the parameter estimates. We are, however, interested in the regression sum of squares (RSS), which is 15.493.

Step #5: Divide the RSS by 2 and compare with the χ2 table’s critical value for the appropriate degrees of freedom

Dividing the RSS by 2, we get 7.747. We look up the critical χ2 value for one degree of freedom at the 5% significance level, and get 3.84. Since our χ2 value exceeds the critical value, we can conclude there is strong evidence of heteroscedasticity present.
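The five steps can be sketched the same way. Two assumptions in the sketch below: it uses synthetic heteroscedastic data in place of the tract figures, and it uses the conventional standardization e²/σ̂² in step 3, so the resulting statistic will differ from 7.747:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic heteroscedastic data: error spread grows with income.
n = 59
income = np.linspace(4_400, 24_900, n)
ownratio = -2.2 + 0.0003 * income + rng.normal(0.0, 0.25 * (income / 10_000) ** 2)

# Step 1: run the regression and collect the residuals.
X = np.column_stack([np.ones(n), income])
beta, *_ = np.linalg.lstsq(X, ownratio, rcond=None)
e = ownratio - X @ beta

# Step 2: estimate the residual variance, sigma^2 = sum(e^2) / n.
sigma2 = float(e @ e) / n

# Step 3: standardize the squared residuals.
g = e**2 / sigma2

# Step 4: regress g on the independent variable(s) and take the
# regression (explained) sum of squares of that auxiliary fit.
gamma, *_ = np.linalg.lstsq(X, g, rcond=None)
rss = float(np.sum((X @ gamma - g.mean()) ** 2))

# Step 5: compare RSS / 2 with the chi-squared critical value
# (3.84 for one degree of freedom at the 5% level).
bp = rss / 2
print(bp > 3.84)
```

With more independent variables, the only change is adding columns to X and using the χ² critical value for that many degrees of freedom.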

The Park Test

Last, but certainly not least, comes the Park test. I saved this one for last because it is the simplest of the three methods and, unlike the other two, provides information that can help eliminate the heteroscedasticity. The Park test assumes there is a relationship between the error variance and one of the regression model’s independent variables. The steps involved are as follows:

Step #1: Run your original regression model and collect the residuals

Done.

Step #2: Square the regression residuals and compute the logs of the squared residuals and the values of the suspected independent variable.

We’ll square the regression residuals, and take their natural log. We will also take the natural log of INCOME:

Tract   Residual Squared   LnResidual Squared   LnIncome
601     4.222              1.440                10.123
602     0.043              (3.157)              9.382
603     0.007              (4.987)              9.868
604     2.126              0.754                9.922
605     0.058              (2.848)              9.910
606     2.378              0.866                9.639
607     0.113              (2.176)              9.604
608     3.209              1.166                9.842
609     1.601              0.470                9.862
609     4.852              1.579                9.973
610     1.769              0.571                9.621
611     0.267              (1.320)              9.657
612     0.024              (3.720)              9.418
613     0.019              (3.960)              9.217
614     0.705              (0.349)              8.535
615     0.016              (4.162)              9.001
616     0.881              (0.127)              8.389
616     0.622              (0.475)              8.596
617     0.095              (2.356)              9.163
618     0.158              (1.847)              9.480
619     0.045              (3.112)              9.362
620     0.022              (3.796)              9.450
621     0.362              (1.015)              9.460
623     0.317              (1.148)              9.629
624     0.298              (1.211)              9.676
625     0.105              (2.255)              9.559
626     0.288              (1.245)              9.413
627     0.203              (1.596)              9.249
628     0.577              (0.549)              9.542
629     0.513              (0.668)              9.561
630     0.502              (0.689)              9.667
631     2.841              1.044                9.848
632     1.754              0.562                9.766
633     1.071              0.069                9.744
634     0.032              (3.437)              9.607
635     1.615              0.479                9.872
701     0.518              (0.658)              9.812
705     1.052              0.051                9.562
706     0.099              (2.309)              9.669
710     0.835              (0.180)              9.995
711     1.171              0.158                9.867
712     0.275              (1.289)              9.989
713     0.724              (0.323)              10.039
713     2.802              1.030                10.022
714     0.773              (0.257)              9.883
714     0.463              (0.770)              9.735
718     0.637              (0.452)              9.459
718     0.194              (1.640)              9.195
719     2.454              0.898                9.737
719     2.169              0.774                10.067
720     0.074              (2.608)              9.127
721     8.720              2.166                10.007
721     1.956              0.671                9.886
724     0.283              (1.263)              10.117
726     3.686              1.305                9.806
728     7.008              1.947                9.964
731     9.492              2.250                10.009
731     4.373              1.476                9.893
735     0.122              (2.102)              9.493

Step #3: Run the regression equation using the log of the squared residuals as the dependent variable and the log of the suspected independent variable as the independent variable

That results in the following regression equation:

Ln(e²) = 1.957*LnIncome – 19.592

Step #4: If the t-ratio for the transformed independent variable is significant, you can conclude heteroscedasticity is present.

The parameter estimate for LnIncome is significant, with a t-ratio of 3.499, so we conclude that heteroscedasticity is present.
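The Park test steps can be sketched as follows. The data here are synthetic (an assumption for illustration, with error spread proportional to income), and a larger sample size is used to keep the slope’s t-ratio estimate stable:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic heteroscedastic data: error standard deviation grows with income.
n = 200
income = np.linspace(4_400, 24_900, n)
ownratio = -2.2 + 0.0003 * income + rng.normal(0.0, 0.00008 * income)

# Step 1: run the original regression and collect the residuals.
X = np.column_stack([np.ones(n), income])
beta, *_ = np.linalg.lstsq(X, ownratio, rcond=None)
e = ownratio - X @ beta

# Step 2: logs of the squared residuals and of the suspect variable.
ln_e2, ln_inc = np.log(e**2), np.log(income)

# Step 3: regress ln(e^2) on ln(income).
Z = np.column_stack([np.ones(n), ln_inc])
b, *_ = np.linalg.lstsq(Z, ln_e2, rcond=None)
resid = ln_e2 - Z @ b

# Step 4: t-ratio of the slope; a significant slope signals heteroscedasticity.
s2 = float(resid @ resid) / (n - 2)
se_slope = float(np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1]))
t_ratio = float(b[1] / se_slope)
print(t_ratio > 2.0)
```

Because the noise standard deviation is proportional to income here, the slope on LnIncome should come out near 2, well clear of significance.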

Next Forecast Friday Topic: Correcting Heteroscedasticity

Thanks for your patience! Now you know the three most common methods for detecting heteroscedasticity: the Goldfeld-Quandt test, the Breusch-Pagan test, and the Park test. As you will see in next week’s Forecast Friday post, the Park test will be beneficial in helping us eliminate the heteroscedasticity. We will discuss the most common approach to correcting heteroscedasticity: weighted least squares (WLS) regression, and show you how to apply it. Next week’s Forecast Friday post will conclude our discussion of regression violations, and allow us to resume discussions of more practical applications in forecasting.

*************************

Help us Reach 200 Fans on Facebook by Tomorrow!

Thanks to all of you, Analysights now has over 160 fans on Facebook! Can you help us get up to 200 fans by tomorrow? If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!


Forecast Friday Topic: Heteroscedasticity

August 12, 2010

(Seventeenth in a series)

Recall that one of the important assumptions in regression analysis is that a regression equation exhibit homoscedasticity: the condition that the error terms have a constant variance. Today we discuss heteroscedasticity, the violation of that assumption.

Heteroscedasticity, like autocorrelation and multicollinearity, results in inefficient parameter estimates. The standard errors of the parameter estimates tend to be biased, which means that the t-ratios and confidence intervals calculated around the suspect independent variable will not be valid, and the model will generate dubious predictions.

Heteroscedasticity occurs mostly in cross-sectional, as opposed to time series, data and mostly in large data sets. When data sets are large, the range of values for an independent variable can be quite wide. This is especially the case in data where income or other measures of wealth are used as independent variables. Persons with low income have few options about how to spend their money, while persons with high incomes have many options. If you were trying to predict whether the conviction rate for crimes differed between low-income and high-income counties, your model might exhibit heteroscedasticity because a low-income person may not have the funds for an adequate defense and may be restricted to a public defender or other inexpensive attorney. A wealthy individual, on the other hand, can hire the very best defense lawyer money can buy; or he could choose an inexpensive lawyer, or even the public defender. The wealthy individual may even be able to make restitution in lieu of a conviction.

How does this disparity affect your model? Recall from our earlier discussions on regression analysis that the least-squares method places more weight on extreme values. When outliers exist in data, they generate large residuals that get scattered out from those of the remaining observations. While heteroscedastic error terms will still have a mean of zero, their variance is greatly out of whack, resulting in inefficient parameter estimates.

In today’s Forecast Friday post, we will look at a data set for a regional housing market, perform a regression, and show how to detect heteroscedasticity visually.

Heteroscedasticity in the Housing Market

The best depiction of heteroscedasticity comes from my college econometrics textbook, Introducing Econometrics, by William S. Brown. In the chapter on heteroscedasticity, Brown provides a data set of housing statistics from the 1980 Census for Pierce County, Washington, which I am going to use for our model. The housing market is certainly one market where heteroscedasticity is deeply entrenched, since there is a dramatic range for both incomes and home market values. In our data set, we have 59 census tracts within Pierce County. Our independent variable is the median family income for the census tract; our dependent variable is the OwnRatio – the ratio of the number of families who own their homes to the number of families who rent. Our data set is as follows:

Housing Data

Tract   Income     Ownratio
601     $24,909    7.220
602     $11,875    1.094
603     $19,308    3.587
604     $20,375    5.279
605     $20,132    3.508
606     $15,351    0.789
607     $14,821    1.837
608     $18,816    5.150
609     $19,179    2.201
609     $21,434    1.932
610     $15,075    0.919
611     $15,634    1.898
612     $12,307    1.584
613     $10,063    0.901
614     $5,090     0.128
615     $8,110     0.059
616     $4,399     0.022
616     $5,411     0.172
617     $9,541     0.916
618     $13,095    1.265
619     $11,638    1.019
620     $12,711    1.698
621     $12,839    2.188
623     $15,202    2.850
624     $15,932    3.049
625     $14,178    2.307
626     $12,244    0.873
627     $10,391    0.410
628     $13,934    1.151
629     $14,201    1.274
630     $15,784    1.751
631     $18,917    5.074
632     $17,431    4.272
633     $17,044    3.868
634     $14,870    2.009
635     $19,384    2.256
701     $18,250    2.471
705     $14,212    3.019
706     $15,817    2.154
710     $21,911    5.190
711     $19,282    4.579
712     $21,795    3.717
713     $22,904    3.720
713     $22,507    6.127
714     $19,592    4.468
714     $16,900    2.110
718     $12,818    0.782
718     $9,849     0.259
719     $16,931    1.233
719     $23,545    3.288
720     $9,198     0.235
721     $22,190    1.406
721     $19,646    2.206
724     $24,750    5.650
726     $18,140    5.078
728     $21,250    1.433
731     $22,231    7.452
731     $19,788    5.738
735     $13,269    1.364

Data taken from U.S. Bureau of Census 1980 Pierce County, WA; Reprinted in Brown, W.S., Introducing Econometrics, St. Paul (1991): 198-200.

When we run our regression, we get the following equation:

Ŷ = 0.000297*Income – 2.221

Both the intercept and independent variable’s parameter estimates are significant, with the intercept parameter having a t-ratio of -4.094 and the income estimate having one of 9.182. R2 is 0.597, and the F-statistic is a strong 84.31. The model seems to be pretty good – strong t-ratios and F-statistic, a high coefficient of determination, and the sign on the parameter estimate for Income is positive, as we would expect. Generally, the higher the income, the greater the Own-to-rent ratio. So far so good.

The problem comes when we do a visual inspection of our data: first plotting the independent variable against the dependent variable, and then plotting the independent variable against the regression residuals. First, let’s take a look at the scatter plot of Income and OwnRatio:

Without even looking at the residuals, we can see that as median family income increases, the data points begin to spread out. Look at what happens to the distance between data points above and below the line when median family incomes reach $20,000: OwnRatios vary drastically.

Now let’s plot Income against the regression’s residuals:

This scatter plot shows essentially the same phenomenon as the previous graph, but from a different perspective. We can clearly see the error terms fanning out as Income increases. In fact, we can see the residuals diverging at increasing rates once Income starts moving from $10,000 to $15,000, and compounding further as incomes go higher. Roughly half the residuals fall on the positive side and half on the negative, allowing us to meet the regression assumption that our residuals have a mean of zero; hence our parameter estimates are not biased. However, because we violated the constant variance assumption, the standard error of our regression is biased, so our parameter estimates are suspect.
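The fanning-out can also be summarized numerically rather than visually. A minimal sketch, using synthetic data standing in for the tract figures (an assumption for illustration): split the sample at the median income and compare the residual spread on each side.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the tract data: residual spread grows with income.
n = 59
income = np.linspace(4_400, 24_900, n)
ownratio = -2.2 + 0.0003 * income + rng.normal(0.0, 0.25 * (income / 10_000) ** 2)

# Fit the simple regression and collect the residuals.
X = np.column_stack([np.ones(n), income])
beta, *_ = np.linalg.lstsq(X, ownratio, rcond=None)
e = ownratio - X @ beta

# Compare residual spread below and above the median income.
cut = np.median(income)
sd_low = float(e[income <= cut].std())
sd_high = float(e[income > cut].std())
print(sd_high > sd_low)  # the "fan" means more spread at higher incomes
```

This is only a rough screen; the formal tests in next week’s post make the comparison rigorous.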

Visual Inspection Only Gets You So Far

By visually inspecting our residuals, we can clearly see that our error terms are not homoscedastic. When you have a regression model, especially for cross-sectional data sets like this, you should visually inspect every independent variable against the dependent variable and against the error terms in order to get an a priori indication of heteroscedasticity. However, visual inspection alone is not a guarantee that heteroscedasticity exists. There are three particularly simple methods for detecting heteroscedasticity, which we will discuss in next week’s Forecast Friday post: the Park Test, the Goldfeld-Quandt Test, and the Breusch-Pagan Test.


Forecast Friday Topic: Correcting Autocorrelation

August 5, 2010

(Sixteenth in a series)

Last week, we discussed how to detect autocorrelation – the violation of the regression assumption that the error terms are not correlated with one another – in your forecasting model. Models exhibiting autocorrelation have parameter estimates that are inefficient, and R2s and t-ratios that seem overly inflated. As a result, your model generates forecasts that are too good to be true and has a tendency to miss turning points in your time series. In last week’s Forecast Friday post, we showed you how to diagnose autocorrelation: examining the model’s parameter estimates, visually inspecting the data, and computing the Durbin-Watson statistic. Today, we’re going to discuss how to correct it.

Revisiting our Data Set

Recall our data set: average hourly wages of textile and apparel workers for the 18 months from January 1986 through June 1987, as reported in the Survey of Current Business (September issues from 1986 and 1987), and reprinted in Data Analysis Using Microsoft ® Excel, by Michael R. Middleton, page 219:

Month    t    Wage
Jan-86   1    5.82
Feb-86   2    5.79
Mar-86   3    5.80
Apr-86   4    5.81
May-86   5    5.78
Jun-86   6    5.79
Jul-86   7    5.79
Aug-86   8    5.83
Sep-86   9    5.91
Oct-86   10   5.87
Nov-86   11   5.87
Dec-86   12   5.90
Jan-87   13   5.94
Feb-87   14   5.93
Mar-87   15   5.93
Apr-87   16   5.94
May-87   17   5.89
Jun-87   18   5.91

We generated the following regression model:

Ŷ = 5.7709 + 0.0095t

Our model had an R2 of .728, and t-ratios of about 368 for the intercept term and 6.55 for the parameter estimate, t. The Durbin-Watson statistic was 1.05, indicating positive autocorrelation. How do we correct for autocorrelation?
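The Durbin-Watson statistic is straightforward to recompute from the residuals: it is the sum of squared successive residual differences divided by the sum of squared residuals. A sketch using the wage data above (hand calculations may round slightly differently):

```python
import numpy as np

# Average hourly wages, Jan-86 through Jun-87, from the table above.
wages = np.array([5.82, 5.79, 5.80, 5.81, 5.78, 5.79, 5.79, 5.83, 5.91,
                  5.87, 5.87, 5.90, 5.94, 5.93, 5.93, 5.94, 5.89, 5.91])
t = np.arange(1, 19)

# Fit the trend model Wage = b0 + b1*t and collect the residuals.
X = np.column_stack([np.ones_like(t, dtype=float), t])
beta, *_ = np.linalg.lstsq(X, wages, rcond=None)
e = wages - X @ beta

# Durbin-Watson: sum of squared successive differences / sum of squares.
dw = float(np.sum(np.diff(e) ** 2) / np.sum(e**2))
print(round(dw, 2))  # values well below 2 indicate positive autocorrelation
```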

Lagging the Dependent Variable

One of the most common remedies for autocorrelation is to lag the dependent variable one or more periods and then make the lagged dependent variable the independent variable. So, in our data set above, you would take the first value of the dependent variable, $5.82, and make it the independent variable for period 2, with $5.79 being the dependent variable; in like manner, $5.79 will also become the independent variable for the next period, whose dependent variable has a value of $5.80, and so on. Since the error terms from one period to another exhibit correlation, by using the previous value of the dependent variable to predict the next one, you reduce that correlation of errors.

You can lag for as many periods as you need to; however, note that you lose the first observation when you lag one period (unless you know the value from the period before the start of the data set, you have nothing with which to predict the first observation). You’ll lose two observations if you lag two periods, and so on. If you have a very small data set, the loss of degrees of freedom can lead to Type II error – failing to identify a parameter estimate as significant when in fact it is. So, you must be careful here.

In this case, by lagging our data by one period, we have the following data set:

Month    Wage    Lag1 Wage
Feb-86   $5.79   $5.82
Mar-86   $5.80   $5.79
Apr-86   $5.81   $5.80
May-86   $5.78   $5.81
Jun-86   $5.79   $5.78
Jul-86   $5.79   $5.79
Aug-86   $5.83   $5.79
Sep-86   $5.91   $5.83
Oct-86   $5.87   $5.91
Nov-86   $5.87   $5.87
Dec-86   $5.90   $5.87
Jan-87   $5.94   $5.90
Feb-87   $5.93   $5.94
Mar-87   $5.93   $5.93
Apr-87   $5.94   $5.93
May-87   $5.89   $5.94
Jun-87   $5.91   $5.89

So, we have created a new independent variable, Lag1_Wage. Notice that we are no longer including the time period t as an independent variable. This doesn’t mean that we should or shouldn’t; in this case, we’re only trying to demonstrate the effect of the lagging.

Rerunning the Regression

Now we do our regression analysis. We come up with the following equation:

Ŷ = 0.8253 + 0.8600*Lag1_Wage

Apparently, from this model, each $1 change in hourly wage from the previous month is associated with an average $0.86 change in hourly wages for the current month. The R2 for this model was virtually unchanged, at 0.730. However, the Durbin-Watson statistic is now 2.01 – just about the total eradication of autocorrelation. Unfortunately, the intercept has a t-ratio of 1.04, indicating it is not significant. The t-ratio for Lag1_Wage is about 6.37, not much different from the t-ratio for t in our previous model. However, we did get rid of the autocorrelation.

The statistically insignificant intercept term in the lagged model is likely a consequence of the degree of freedom lost in an already small sample – the Type II error discussed above. Perhaps if we had several more months of data, we might have had a significant intercept estimate.
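The before-and-after effect of lagging on the Durbin-Watson statistic can be sketched with the wage data above (rounding may differ slightly from the hand-worked figures):

```python
import numpy as np

# Average hourly wages, Jan-86 through Jun-87, from the table above.
wages = np.array([5.82, 5.79, 5.80, 5.81, 5.78, 5.79, 5.79, 5.83, 5.91,
                  5.87, 5.87, 5.90, 5.94, 5.93, 5.93, 5.94, 5.89, 5.91])

def ols_resid(x, y):
    """Residuals from a simple OLS fit of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def durbin_watson(e):
    return float(np.sum(np.diff(e) ** 2) / np.sum(e**2))

# Original trend model: Wage on t (all 18 observations).
t = np.arange(1.0, 19.0)
dw_trend = durbin_watson(ols_resid(t, wages))

# Lagged model: Wage on Lag1_Wage (loses the first observation).
dw_lagged = durbin_watson(ols_resid(wages[:-1], wages[1:]))

print(round(dw_trend, 2), round(dw_lagged, 2))
```

The lagged model’s statistic should land close to 2, the no-autocorrelation benchmark, while the trend model’s sits well below it.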

Other Approaches to Correcting Autocorrelation

There are other approaches to correcting autocorrelation. One important one is to identify important independent variables that have been omitted from the model. Perhaps if we had data on the average years of work experience of the textile and apparel labor force from month to month, that might have increased our R2 and reduced the correlations in the error term. Another thing we could do is difference the data. Differencing works like lagging, only we subtract the values of the dependent and independent variables for the first observation from their respective values in the second observation; then we subtract the second observation's original values from those of the third, and so on. Then we run a regression on the differences between observations. The problem here is that, again, your data set is reduced by one observation, and your transformed model will not have an intercept term, which can cause issues in some studies.

Other approaches to correcting autocorrelation include quasi-differencing, the Cochrane-Orcutt Procedure, the Hildreth-Lu Procedure, and the Durbin Two-Step Method. These methods are iterative, require a lot of tedious effort, and are beyond the scope of this post. But many college-level forecasting textbooks have sections on these procedures if you're interested in further reading on them.

Next Forecast Friday Topic: Detecting Heteroscedasticity

Next week, we’ll discuss the last of the regression violations, heteroscedasticity, which is the violation of the assumption that error terms have a constant variance. We will discuss why heteroscedasticity exists and how to diagnose it. The week after that, we’ll discuss remedying heteroscedasticity. Once we have completed our discussions on the regression violations, we will spend a couple of weeks discussing regression modeling techniques like transforming independent variables, using categorical variables, adjusting for seasonality, and other regression techniques. These topics will be far less theoretical and more practical in terms of forecasting.

Forecast Friday Topic: Detecting Autocorrelation

July 29, 2010

(Fifteenth in a series)

We have spent the last few Forecast Friday posts discussing violations of different assumptions in regression analysis. So far, we have discussed the effects of specification bias and multicollinearity on parameter estimates, and their corresponding effect on your forecasts. Today, we will discuss another violation, autocorrelation, which occurs when sequential residual (error) terms are correlated with one another.

When working with time series data, autocorrelation is the most common problem forecasters face. When the assumption of uncorrelated residuals is violated, we end up with models that have inefficient parameter estimates and upwardly biased t-ratios and R2 values. These inflated values make our forecasting model appear better than it really is, and can cause our model to miss turning points. Hence, if your model predicts an increase in sales but sales actually plunge, autocorrelation may be to blame.

What Does Autocorrelation Look Like?

Autocorrelation can take on two types: positive or negative. In positive autocorrelation, consecutive errors usually have the same sign: positive residuals are almost always followed by positive residuals, while negative residuals are almost always followed by negative residuals. In negative autocorrelation, consecutive errors typically have opposite signs: positive residuals are almost always followed by negative residuals and vice versa.

In addition, there are different orders of autocorrelation. The simplest, most common kind of autocorrelation, first-order autocorrelation, occurs when the consecutive errors are correlated. Second-order autocorrelation occurs when error terms two periods apart are correlated, and so forth. Here, we will concentrate solely on first-order autocorrelation.

You will see a visual depiction of positive autocorrelation later in this post.

What Causes Autocorrelation?

The two main culprits for autocorrelation are sluggishness in the business cycle (also known as inertia) and omitted variables from the model. At various turning points in a time series, inertia is very common. At the time when a time series turns upward (downward), its observations build (lose) momentum, and continue going up (down) until the series reaches its peak (trough). As a result, successive observations and the error terms associated with them depend on each other.

Another example of inertia happens when forecasting a time series where the same observations can be in multiple successive periods. For example, I once developed a model to forecast enrollment for a community college, and found autocorrelation to be present in my initial model. This happened because many of the students enrolled during the spring term were also enrolled in the previous fall term. As a result, I needed to correct for that.

The other main cause of autocorrelation is omitted variables from the model. When an important independent variable is omitted from a model, its effect on the dependent variable becomes part of the error term. Hence, if the omitted variable has a positive correlation with the dependent variable, it is likely to cause error terms that are positively correlated.

How Do We Detect Autocorrelation?

To illustrate how we go about detecting autocorrelation, let’s first start with a data set. I have pulled the average hourly wages of textile and apparel workers for the 18 months from January 1986 through June 1987. The original source was the Survey of Current Business, September issues from 1986 and 1987, but this data set was reprinted in Data Analysis Using Microsoft ® Excel, by Michael R. Middleton, page 219:

Month    t    Wage
Jan-86   1    5.82
Feb-86   2    5.79
Mar-86   3    5.80
Apr-86   4    5.81
May-86   5    5.78
Jun-86   6    5.79
Jul-86   7    5.79
Aug-86   8    5.83
Sep-86   9    5.91
Oct-86   10   5.87
Nov-86   11   5.87
Dec-86   12   5.90
Jan-87   13   5.94
Feb-87   14   5.93
Mar-87   15   5.93
Apr-87   16   5.94
May-87   17   5.89
Jun-87   18   5.91

Now, let’s run a simple regression model, using time period t as the independent variable and Wage as the dependent variable. Using the data set above, we derive the following model:

Ŷ = 5.7709 + 0.0095t
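This fit is easy to reproduce; a sketch using `numpy.polyfit` on the data typed in from the table above:

```python
import numpy as np

# Average hourly wages, Jan-86 (t=1) through Jun-87 (t=18)
wages = np.array([5.82, 5.79, 5.80, 5.81, 5.78, 5.79, 5.79, 5.83, 5.91,
                  5.87, 5.87, 5.90, 5.94, 5.93, 5.93, 5.94, 5.89, 5.91])
t = np.arange(1, 19)

# Ordinary least squares fit of Wage on the time period t
slope, intercept = np.polyfit(t, wages, 1)
# Should reproduce the 5.7709 + 0.0095t above
print(f"wage_hat = {intercept:.4f} + {slope:.4f}t")
```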

Examine the Model Output

Notice also the following model diagnostic statistics:

R2 = 0.728

Variable    Coefficient   t-ratio
Intercept   5.7709        367.62
t           0.0095        6.55

You can see that the R2 is high, with changes in t explaining nearly three-quarters of the variation in average hourly wage. Note also the t-ratios for both the intercept and the parameter estimate for t. Both are very high. Recall that a high R2 and high t-ratios are symptoms of autocorrelation.

Visually Inspect Residuals

Just because a model has a high R2 and parameters with high t-ratios doesn’t mean autocorrelation is present. More work must be done to detect autocorrelation. Another way to check for autocorrelation is to visually inspect the residuals. The best way to do this is through plotting the average hourly wage predicted by the model against the actual average hourly wage, as Middleton has done:

Notice the green line representing the Predicted Wage. It is a straight, upward line. This is to be expected, since the independent variable is sequential and shows an increasing trend. The red line depicts the actual wage in the time series. Notice that the model’s forecast is higher than actual for months 5 through 8, and for months 17 and 18. The model also underpredicts for months 12 through 16. This clearly illustrates the presence of positive, first-order autocorrelation.

The Durbin-Watson Statistic

Examining the model components and visually inspecting the residuals are intuitive, but not definitive ways to diagnose autocorrelation. To really be sure if autocorrelation exists, we must compute the Durbin-Watson statistic, often denoted as d.

In our June 24 Forecast Friday post, we demonstrated how to calculate the Durbin-Watson statistic. The formula is:

d = Σ[e(t) − e(t−1)]² / Σ e(t)²

where the numerator is summed from t = 2 to n and the denominator from t = 1 to n.

That is, beginning with the error term for the second observation, we subtract the immediate previous error term from it; then we square the difference. We do this for each observation from the second one onward. Then we sum all of those squared differences together. Next, we square the error terms for each observation, and sum those together. Then we divide the sum of squared differences by the sum of squared error terms, to get our Durbin-Watson statistic.
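The steps just described reduce to a few lines of code; a sketch applying them to the residuals of the Wage-on-t regression above:

```python
import numpy as np

def durbin_watson(residuals):
    """Sum of squared successive differences divided by sum of squared residuals."""
    e = np.asarray(residuals)
    # np.diff computes e(t) - e(t-1) for t = 2..n
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Rebuild the residuals from the trend regression above
wages = np.array([5.82, 5.79, 5.80, 5.81, 5.78, 5.79, 5.79, 5.83, 5.91,
                  5.87, 5.87, 5.90, 5.94, 5.93, 5.93, 5.94, 5.89, 5.91])
t = np.arange(1, 19)
slope, intercept = np.polyfit(t, wages, 1)
residuals = wages - (intercept + slope * t)

d = durbin_watson(residuals)
print(round(d, 3))  # about 1.05, matching the calculation below
```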

For our example, we have the following:

t    Error      Squared Error   e(t) − e(t−1)   Squared Difference
1    0.0396     0.0016
2    0.0001     0.0000          (0.0395)        0.0016
3    0.0006     0.0000          0.0005          0.0000
4    0.0011     0.0000          0.0005          0.0000
5    (0.0384)   0.0015          (0.0395)        0.0016
6    (0.0379)   0.0014          0.0005          0.0000
7    (0.0474)   0.0022          (0.0095)        0.0001
8    (0.0169)   0.0003          0.0305          0.0009
9    0.0536     0.0029          0.0705          0.0050
10   0.0041     0.0000          (0.0495)        0.0024
11   (0.0054)   0.0000          (0.0095)        0.0001
12   0.0152     0.0002          0.0205          0.0004
13   0.0457     0.0021          0.0305          0.0009
14   0.0262     0.0007          (0.0195)        0.0004
15   0.0167     0.0003          (0.0095)        0.0001
16   0.0172     0.0003          0.0005          0.0000
17   (0.0423)   0.0018          (0.0595)        0.0035
18   (0.0318)   0.0010          0.0105          0.0001
Sum:            0.0163                          0.0171

To obtain our Durbin-Watson statistic, we plug our sums into the formula:

d = 0.0171 / 0.0163 ≈ 1.050

What Does the Durbin-Watson Statistic Tell Us?

Our Durbin-Watson statistic is 1.050. What does that mean? The Durbin-Watson statistic is interpreted as follows:

  • If d is close to zero (0), then positive autocorrelation is probably present;
  • If d is close to two (2), then the model is likely free of autocorrelation; and
  • If d is close to four (4), then negative autocorrelation is probably present.

As we saw from our visual examination of the residuals, we appear to have positive autocorrelation, and the fact that our Durbin-Watson statistic is about halfway between zero and two suggests the presence of positive autocorrelation.
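As a rough sketch, the interpretation rules above can be coded up; note the cutoffs here are illustrative only, since a formal test compares d against the Durbin-Watson critical-value tables for your sample size and number of regressors:

```python
def interpret_dw(d):
    """Rough reading of a Durbin-Watson statistic (which runs from 0 to 4).

    The 1.5 and 2.5 cutoffs are illustrative, not formal critical values.
    """
    if d < 1.5:
        return "positive autocorrelation likely"
    if d > 2.5:
        return "negative autocorrelation likely"
    return "little evidence of autocorrelation"

print(interpret_dw(1.050))  # our wage model's statistic
```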

Next Forecast Friday Topic: Correcting Autocorrelation

Today we went through the process of understanding the causes and effect of autocorrelation, and how to suspect and detect its presence. Next week, we will discuss how to correct for autocorrelation and eliminate it so that we can have more efficient parameter estimates.

*************************

If you Like Our Posts, Then “Like” Us on Facebook and Twitter!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.

Forecast Friday Topic: Multicollinearity – Correcting and Accepting it

July 22, 2010

(Fourteenth in a series)

In last week’s Forecast Friday post, we discussed how to detect multicollinearity in a regression model and how dropping a suspect variable or variables from the model can be one approach to reducing or eliminating multicollinearity. However, removing variables can cause other problems – particularly specification bias – if the suspect variable is indeed an important predictor. Today we will discuss two additional approaches to correcting multicollinearity – obtaining more data and transforming variables – and will discuss when it’s best to just accept the multicollinearity.

Obtaining More Data

Multicollinearity is really an issue with the sample, not the population. Sometimes, sampling produces a data set that might be too homogeneous. One way to remedy this would be to add more observations to the data set. Enlarging the sample will introduce more variation in the data series, which reduces the effect of sampling error and helps increase precision when estimating various properties of the data. Increased sample sizes can reduce either the presence or the impact of multicollinearity, or both. Obtaining more data is often the best way to remedy multicollinearity.

Obtaining more data does have problems, however. Sometimes additional data just isn't available; this is especially the case with time series data, where history is finite. If you must obtain the additional information through great effort, it can be costly and time consuming. Also, the new data you add to your sample could be quite similar to your original data set, in which case enlarging the sample yields no benefit. The new data could even make the problem worse!

Transforming Variables

Another way statisticians and modelers go about eliminating multicollinearity is through data transformation. This can be done in a number of ways.

Combine Some Variables

The most obvious way would be to find a way to combine some of the variables. After all, multicollinearity suggests that two or more independent variables are strongly correlated. Perhaps you can multiply two variables together and use the product of those two variables in place of them.

So, in our example of the donor history, we had the two variables “Average Contribution in Last 12 Months” and “Times Donated in Last 12 Months.” We can multiply them to create a composite variable, “Total Contributions in Last 12 Months,” and then use that new variable, along with the variable “Months Since Last Donation” to perform the regression. In fact, if we did that with our model, we end up with a model (not shown here) that has an R2=0.895, and this time the coefficient for “Months Since Last Donation” is significant, as is our “Total Contribution” variable. Our F statistic is a little over 72. Essentially, the R2 and F statistics are only slightly lower than in our original model, suggesting that the transformation was useful. However, looking at the correlation matrix, we still see a strong negative correlation between our two independent variables, suggesting that we still haven’t eliminated multicollinearity.
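The combination step can be sketched in a couple of lines; the donor records below are hypothetical stand-ins, since the post's actual dataset is not reproduced here:

```python
import pandas as pd

# Hypothetical donor records (not the post's actual data)
donors = pd.DataFrame({
    "avg_contribution_12m": [25.0, 40.0, 15.0, 60.0],
    "times_donated_12m":    [4,    2,    6,    1],
    "months_since_last":    [1,    5,    2,    9],
})

# Replace the two collinear variables with their product
donors["total_contribution_12m"] = (
    donors["avg_contribution_12m"] * donors["times_donated_12m"]
)

# The regression would then use these two columns instead of the original three
print(donors[["total_contribution_12m", "months_since_last"]])
```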

Centered Interaction Terms

Sometimes we can reduce multicollinearity by creating an interaction term between the variables in question. In a model trying to predict performance on a test based on hours spent studying and hours of sleep, you might find that hours spent studying appears to be correlated with hours of sleep. So you create a third independent variable, Sleep_Study_Interaction. To do this, you compute the mean of the hours-of-sleep variable and the mean of the hours-of-studying variable. For each observation, you subtract each independent variable's mean from its respective value for that observation, then multiply the two differences together. This product is your interaction term, Sleep_Study_Interaction. Now run the regression with the original two variables and the interaction term. By subtracting the means from the variables in question, you are in effect centering the interaction term, taking the central tendency of your data into account.
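A sketch of that centering procedure, using made-up study and sleep hours (the values and column names are hypothetical):

```python
import pandas as pd

# Hypothetical hours of study and sleep for five students
df = pd.DataFrame({
    "hours_study": [2.0, 4.0, 6.0, 8.0, 10.0],
    "hours_sleep": [8.0, 7.0, 6.5, 6.0, 5.0],
})

# Subtract each variable's mean from its values (centering),
# then multiply the centered values together
study_c = df["hours_study"] - df["hours_study"].mean()
sleep_c = df["hours_sleep"] - df["hours_sleep"].mean()
df["sleep_study_interaction"] = study_c * sleep_c

# The regression would then include hours_study, hours_sleep,
# and sleep_study_interaction
print(df["sleep_study_interaction"].tolist())
```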

Differencing Data

If you’re working with time series data, one way to reduce multicollinearity is to run your regression using differences. To do this, you take every variable – dependent and independent – and, beginning with the second observation – subtract the immediate prior observation’s values for those variables from the current observation. Now, instead of working with original data, you are working with the change in data from one period to the next. Differencing eliminates multicollinearity by removing the trend component of the time series. If all independent variables had followed more or less the same trend, they could end up highly correlated. Sometimes, however, trends can build on themselves for several periods, so multiple differencing may be required. In this case, subtracting the period before was taking a “first difference.” If we subtracted two periods before, it’s a “second difference,” and so on. Note also that with differencing, we lose the first observations in the data, depending on how many periods we have to difference, so if you have a small data set, differencing can reduce your degrees of freedom and increase your risk of making a Type I Error: concluding that an independent variable is not statistically significant when, in truth it is.

Other Transformations

Sometimes, it makes sense to take a look at a scatter plot of each independent variable’s values with that of the dependent variable to see if the relationship is fairly linear. If it is not, that’s a cue to transform an independent variable. If an independent variable appears to have a logarithmic relationship, you might substitute its natural log. Also, depending on the relationship, you can use other transformations: square root, square, negative reciprocal, etc.

Another consideration: if you’re predicting the impact of violent crime on a city’s median family income, instead of using the number of violent crimes committed in the city, you might instead divide it by the city’s population and come up with a per-capita figure. That will give more useful insights into the incidence of crime in the city.

Transforming data in these ways helps reduce multicollinearity by representing independent variables differently, so that they are less correlated with other independent variables.

Limits of Data Transformation

Transforming data has its own pitfalls. First, transforming data also transforms the model. A model that uses a per-capita crime figure for an independent variable has a very different interpretation than one using an aggregate crime figure. Also, interpretations of models and their results get more complicated as data is transformed. Ideally, models are supposed to be parsimonious – that is, they explain a great deal about the relationship as simply as possible. Typically, parsimony means as few independent variables as possible, but it also means as few transformations as possible. You also need to do more work. If you try to plug in new data to your resulting model for forecasting, you must remember to take the values for your data and transform them accordingly.

Living With Multicollinearity

Multicollinearity is par for the course when a model consists of two or more independent variables, so often the question isn’t whether multicollinearity exists, but rather how severe it is. Multicollinearity doesn’t bias your parameter estimates, but it inflates their variance, making them inefficient or untrustworthy. As you have seen from the remedies offered in this post, the cures can be worse than the disease. Correcting multicollinearity can also be an iterative process; the benefit of reducing multicollinearity may not justify the time and resources required to do so. Sometimes, any effort to reduce multicollinearity is futile. Generally, for the purposes of forecasting, it might be perfectly OK to disregard the multicollinearity. If, however, you’re using regression analysis to explain relationships, then you must try to reduce the multicollinearity.

A good approach is to run a couple of different models, some using variations of the remedies we’ve discussed here, and comparing their degree of multicollinearity with that of the original model. It is also important to compare the forecast accuracy of each. After all, if all you’re trying to do is forecast, then a model with slightly less multicollinearity but a higher degree of forecast error is probably not preferable to a more precise forecasting model with higher degrees of multicollinearity.

The Takeaways:

  1. Where you have multiple regression, you almost always have multicollinearity, especially in time series data.
  2. A correlation matrix is a good way to detect multicollinearity. Multicollinearity can be very serious if the correlation matrix shows that some of the independent variables are more highly correlated with each other than they are with the dependent variable.
  3. You should suspect multicollinearity if:
    1. You have a high R2 but low t-statistics;
    2. The sign for a coefficient is opposite of what is normally expected (a relationship that should be positive is negative, and vice-versa).
  4. Multicollinearity doesn’t bias parameter estimates, but makes them untrustworthy by enlarging their variance.
  5. There are several ways of remedying multicollinearity, with obtaining more data often being the best approach. Each remedy contributes its own set of problems and limitations, so you must weigh the benefit of reduced multicollinearity against the time and resources needed to achieve it, and against the resulting impact on your forecast accuracy.
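The correlation-matrix check in takeaway 2 is a few lines of pandas; the data below is made up so that x1 and x2 move together:

```python
import pandas as pd

# Hypothetical sample: x2 tracks x1 closely, so the two independent
# variables should correlate more with each other than with y
df = pd.DataFrame({
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "x2": [2.1, 3.9, 6.2, 8.0, 9.8, 12.1],
    "y":  [3.0, 2.0, 6.0, 5.0, 9.0, 8.0],
})

corr = df.corr()
print(corr.round(2))

# Flag any pair of independent variables that are more highly correlated
# with each other than either is with the dependent variable
if corr.loc["x1", "x2"] > max(corr.loc["x1", "y"], corr.loc["x2", "y"]):
    print("Suspect serious multicollinearity between x1 and x2")
```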

Next Forecast Friday Topic: Autocorrelation

These past two weeks, we discussed the problem of multicollinearity. Next week, we will discuss the problem of autocorrelation – the phenomenon that occurs when we violate the assumption that the error terms are not correlated with each other. We will discuss how to detect autocorrelation, discuss in greater depth the Durbin-Watson statistic’s use as a measure of the presence of autocorrelation, and how to correct for autocorrelation.
