Archive for August, 2010

Insight Central Will Resume Week of Sept. 6, 2010

August 30, 2010

With this being the final week of summer, we figure many of our readers are planning to squeeze in any last minute vacations or “back to school” activities, so we’re going to take this week off from Insight Central.  We will resume our posts next week, and Forecast Friday will return next Thursday, September 9. 

We here at Analysights wish all of you a great final week of summer, and a safe, enjoyable Labor Day weekend.  Thanks for reading Insight Central!

Forecast Friday Topic: Correcting Heteroscedasticity

August 26, 2010

(Nineteenth in a series)

In last week’s Forecast Friday post, we discussed the three most commonly used analytical approaches to detecting heteroscedasticity: the Goldfeld-Quandt test, the Breusch-Pagan test, and the Park test. We continued to work with our data set of 59 census tracts in Pierce County, WA, from which we were trying to determine what, if any, influence the tract’s median family income had on the ratio of the number of families in the tract who own their home to the number of families who rent. As we saw, heteroscedasticity was present in our model, caused largely by the wide variation in income from one census tract to the other.

Recall that while INCOME, for the most part, had a positive relationship with OWNRATIO, we found many census tracts that had low OWNRATIOs despite high median family incomes. This is because, unlike low-income families, whose housing options are limited, high-income families have many more housing options. The fact that the wealthier census tracts have more options increases the variability within the relationship between INCOME and OWNRATIO, causing our model to generate errors that don’t have a constant variance and parameter estimates that don’t seem to make sense.

Today, we turn our attention to correcting heteroscedasticity, and we will do that by transforming our model using Weighted Least Squares (WLS) regression. And we’ll show how our results from the Park test can enable us to approximate the weights to use in our WLS model.

Weighted Least Squares Regression

The reason wide variances in the values of one or more independent variables cause heteroscedastic errors is that the regression model places heavier weight on extreme values. By weighting each observation in the data set, we eliminate that tendency. But how do we know what weights to use? That depends on whether the variances of the individual observations are known or unknown.

If the variances are known, then you would simply divide each observation by its standard deviation and then run your regression to get a transformed model. Rarely, however, is the individual variance known, so we need to apply a more intricate approach.
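To make the known-variance case concrete, here is a minimal sketch in Python. The data values, and especially the sigma array of "known" standard deviations, are hypothetical and purely for illustration:

    import numpy as np

    # Hypothetical example: three observations whose error standard deviations
    # are assumed known (the sigma values here are made up for illustration)
    y = np.array([7.220, 1.094, 3.587])         # dependent variable
    x = np.array([24909.0, 11875.0, 19308.0])   # independent variable
    sigma = np.array([2.1, 0.4, 1.3])           # known std. dev. of each error term

    # Divide every term -- including the intercept's column of 1s -- by sigma,
    # then run ordinary least squares on the transformed data
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X / sigma[:, None], y / sigma, rcond=None)
    print(beta)  # [intercept, slope] of the transformed (homoscedastic) model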

Returning to our housing model, our regression equation was:

Ŷ= 0.000297*Income – 2.221

With an R²=0.597, an F-ratio of 84.31, and t-ratios of 9.182 for INCOME and -4.094 for the intercept.

We know that INCOME, our independent variable, is the source of the heteroscedasticity. Let’s also assume that the “correct” housing model has a linear functional form, like our model above. In this case, we divide each observation’s dependent variable (OWNRATIO) by the value of its independent variable, INCOME, forming a new dependent variable (OwnRatio_Income), and then take the reciprocal of the INCOME value to form a new independent variable, IncomeReciprocal.
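For example, tract 601 has an OWNRATIO of 7.220 and an INCOME of $24,909, so its transformed values are OwnRatio_Income = 7.220/24,909 ≈ 0.000290 and IncomeReciprocal = 1/24,909 ≈ 0.000040 – the first row of the transformed data shown in the table below.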

Recalling the Park Test

How do we know to choose the reciprocal? Remember when we did the Park test last week? We got the following equation:

Ln(e²) = 1.957(LnIncome) – 19.592

The parameter estimate for LnIncome is 1.957. The Park test assumes that the variance of the heteroscedastic error is equal to the variance of the homoscedastic error times Xᵢ raised to an exponent, and that coefficient represents the exponent. Since the Park test is performed by regressing a double-log function, we divide the coefficient by two to arrive at the exponent of Xᵢ by which to weight our observations.

Essentially, we are saying that:

Var(heterosc. errors in housing model) = Var(homosc. errors in housing model) × Incomeᵢ^1.957

For simplicity’s sake, let’s round the coefficient from 1.957 to 2. Dividing by two gives an exponent of 1, so our weight is Xᵢ^(2/2) = Xᵢ: we divide our dependent variable by Xᵢ and replace our independent variable with its reciprocal, 1/Xᵢ.
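To see why this weighting works: if Var(εᵢ) = σ²Xᵢ², then each error’s standard deviation is proportional to Xᵢ, and the rescaled error εᵢ/Xᵢ has Var(εᵢ/Xᵢ) = σ²Xᵢ²/Xᵢ² = σ², a constant. Dividing each observation through by Xᵢ therefore restores a constant error variance.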

Estimating the Housing Model Using WLS

We weight the values for each census tract’s housing data accordingly:

OwnRatio_Income    IncomeReciprocal
0.000290           0.000040
0.000092           0.000084
0.000186           0.000052
0.000259           0.000049
0.000174           0.000050
0.000051           0.000065
0.000124           0.000067
0.000274           0.000053
0.000115           0.000052
0.000090           0.000047
0.000061           0.000066
0.000121           0.000064
0.000129           0.000081
0.000090           0.000099
0.000025           0.000196
0.000007           0.000123
0.000005           0.000227
0.000032           0.000185
0.000096           0.000105
0.000097           0.000076
0.000088           0.000086
0.000134           0.000079
0.000170           0.000078
0.000187           0.000066
0.000191           0.000063
0.000163           0.000071
0.000071           0.000082
0.000039           0.000096
0.000083           0.000072
0.000090           0.000070
0.000111           0.000063
0.000268           0.000053
0.000245           0.000057
0.000227           0.000059
0.000135           0.000067
0.000116           0.000052
0.000135           0.000055
0.000212           0.000070
0.000136           0.000063
0.000237           0.000046
0.000237           0.000052
0.000171           0.000046
0.000162           0.000044
0.000272           0.000044
0.000228           0.000051
0.000125           0.000059
0.000061           0.000078
0.000026           0.000102
0.000073           0.000059
0.000140           0.000042
0.000026           0.000109
0.000063           0.000045
0.000112           0.000051
0.000228           0.000040
0.000280           0.000055
0.000067           0.000047
0.000335           0.000045
0.000290           0.000051
0.000103           0.000075

And we run a regression to get a model of this form:

OwnRatio_Incomeᵢ = α* + β₁*IncomeReciprocalᵢ + εᵢ*

Notice the asterisks on each of the parameter estimates; they denote the transformed model. Performing our transformed regression, we get an R² of .596, not much different from that of our original model. However, notice that the intercept of our transformed model is almost equal to the coefficient of INCOME from our original model. That’s because when we divided each observation by Xᵢ, we essentially divided 0.000297*INCOME by INCOME, turning the slope into the intercept! Since heteroscedasticity doesn’t bias parameter estimates, we would expect the slope of our original model and the intercept of our transformed model to be equivalent. Those parameter estimates are averages, and heteroscedasticity biases not the average but the variance.
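To see the algebra behind this, divide both sides of the original fitted equation by INCOME:

Ŷ/Income = 0.000297 – 2.221*(1/Income)

The original slope, 0.000297, now stands alone as the constant term, while the original intercept, –2.221, becomes the coefficient on IncomeReciprocal.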

Note that the t-ratio for the intercept in our transformed model is much stronger than that of the coefficient for INCOME in our original model (12.19 vs. 9.182), suggesting that the transformed model has generated a more efficient estimate of the slope parameter. That’s because the standard error of the estimate is smaller in our transformed model. We divide a parameter estimate by its standard error to get its t-ratio; because the standard error is smaller, our estimates are more trustworthy.
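If you’d like to replicate the transformation yourself, here is a minimal sketch in Python using the first few tracts from last week’s housing data (run on all 59 observations, the intercept should come out near the original INCOME coefficient of 0.000297):

    import numpy as np

    # A few (income, ownratio) pairs from the housing data in last week's post
    income   = np.array([24909.0, 11875.0, 19308.0, 20375.0, 20132.0, 15351.0])
    ownratio = np.array([7.220, 1.094, 3.587, 5.279, 3.508, 0.789])

    # The Park test exponent rounds to 2, so we weight each observation by income
    y_t = ownratio / income    # OwnRatio_Income
    x_t = 1.0 / income         # IncomeReciprocal

    # OLS on the transformed variables: the intercept now estimates the
    # original slope, and the slope on 1/income estimates the original intercept
    X = np.column_stack([np.ones_like(x_t), x_t])
    (b0, b1), *_ = np.linalg.lstsq(X, y_t, rcond=None)
    print(b0, b1)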

Recap

This concludes our discussion of the violations that can occur with regression analysis and the problems they can cause. You now understand that omitting important independent variables, multicollinearity, autocorrelation, and heteroscedasticity can all cause you to generate models that produce unacceptable forecasts and predictions. You now know how to diagnose these violations and how to correct them. One thing you’ve probably also noticed as we went through these discussions is that data is never perfect. No matter how good our data is, we must still work with it and adapt it in a way that lets us derive actionable insights from it.

Forecast Friday Will Resume Two Weeks From Today

Next Friday leads into the Labor Day weekend, and I am forecasting that many of you will be leaving the office early for the long weekend, so I have decided to publish the next edition of Forecast Friday on September 9. The other two posts that appear earlier in the week will continue as scheduled. Beginning with the September 9 Forecast Friday post, we will talk about additional regression analysis topics that are much less theoretical than those of the last few posts, and much more practical. Until then, Analysights wishes you and your family a great Labor Day weekend!

****************************************************

Help us Reach 200 Fans on Facebook by Tomorrow!
Thanks to all of you, Analysights now has over 160 fans on Facebook! Can you help us get up to 200 fans by tomorrow? If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!

Former Customers Can Be Goldmine – Both in Marketing Research and Winback Sales

August 24, 2010

The other day, I stumbled across this May 28, 2010 blog post from MySmallBusinessMentor.com, which discussed how to re-activate former customers. While you should definitely reach out to former customers and try to get them to buy again, your former customers can also provide a wealth of information from a marketing research and process improvement standpoint.

If a customer has lapsed for, say, 90 or 180 days, or a customer who used to buy once a month is now buying only every other month, reach out to that customer, mention that you noticed he or she isn’t frequenting your business as much, and ask if there’s anything your company isn’t providing that he or she would like to see. It could be that the customer isn’t happy with the product, or found a similar, less expensive product from a competitor. Maybe he or she has “outgrown” your company’s products, or lost a job and can no longer afford them. You won’t know unless you ask.

For the purposes of marketing research, a lapsed customer can be more valuable than a loyal customer, especially when you consider that acquiring a new customer is six times more costly than retaining an existing customer. Taking the time to hear out a former customer can help you take corrective action to prevent other customer defections, improve your practices and product benefits, and even win back your lost customers.

*************************

Help us Reach 200 Fans on Facebook!

Thanks to all of you, Analysights now has more than 160 Facebook fans! We had hoped to get up to 200 fans by this past Friday, but weren’t so lucky. Can you help us out? If you like Forecast Friday – and our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like Insight Central and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!

Do-it Yourself Focus Groups

August 23, 2010

The unstructured nature of focus groups enables marketers and businesses to draw out ideas, perceptions, feelings, and experiences from prospective customers that might not be possible to extract through structured quantitative approaches like surveys. By using focus groups, businesses can come up with ideas for new products and services; lay the groundwork for surveys and advertising campaigns by understanding the vocabulary customers use when describing products and services; understand why customers feel the way they do and what their needs are; and make better sense of the findings from quantitative research.

Focus groups can be very expensive, and running them without careful organization can be disastrous to your marketing efforts. Yesterday’s episode of Your Business, on MSNBC, had a segment on “Do-It Yourself Focus Groups.” The segment covered the following 10 topics/tips for doing your own focus groups:

  1. Why have a focus group?
  2. How do you get started?
  3. Who do you choose (to participate)?
  4. Choose current customers
  5. Choose former customers
  6. Choose employees
  7. Start with a “Trend Question”
  8. Go around the room
  9. Ask for a rating
  10. Follow-up is key.

Although Analysights doesn’t presently do focus groups, we thought we’d share this information with those of you who are interested. Here’s a link to the 3 ½ minute segment. Enjoy!


Forecast Friday Topic: Detecting Heteroscedasticity – Analytical Approaches

August 19, 2010

(Eighteenth in a series)

Last week, we discussed the violation of the homoscedasticity assumption of regression analysis: the assumption that the error terms have a constant variance. When the error terms do not exhibit a constant variance, they are said to be heteroscedastic. A model that exhibits heteroscedasticity produces parameter estimates that are not biased, but rather inefficient. Heteroscedasticity most often appears in cross-sectional data and is frequently caused by a wide range of possible values for one or more independent variables.

Last week, we showed you how to detect heteroscedasticity by visually inspecting the plot of the error terms against the independent variable. Today, we are going to discuss three simple, but very powerful, analytical approaches to detecting heteroscedasticity: the Goldfeld-Quandt test, the Breusch-Pagan test, and the Park test. These approaches are quite simple, but can be a bit tedious to employ.

Reviewing Our Model

Recall our model from last week. We were trying to determine the relationship between a census tract’s median family income (INCOME) and the ratio of the number of families who own their homes to the number of families who rent (OWNRATIO). Our hypothesis was that census tracts with higher median family incomes had a higher proportion of families who owned their homes. I snatched an example from my college econometrics textbook, which pulled INCOME and OWNRATIO figures, compiled during the 1980 Census, for 59 census tracts in Pierce County, Washington. We had the following data:

Housing Data

Tract   Income    Ownratio
601     $24,909   7.220
602     $11,875   1.094
603     $19,308   3.587
604     $20,375   5.279
605     $20,132   3.508
606     $15,351   0.789
607     $14,821   1.837
608     $18,816   5.150
609     $19,179   2.201
609     $21,434   1.932
610     $15,075   0.919
611     $15,634   1.898
612     $12,307   1.584
613     $10,063   0.901
614     $5,090    0.128
615     $8,110    0.059
616     $4,399    0.022
616     $5,411    0.172
617     $9,541    0.916
618     $13,095   1.265
619     $11,638   1.019
620     $12,711   1.698
621     $12,839   2.188
623     $15,202   2.850
624     $15,932   3.049
625     $14,178   2.307
626     $12,244   0.873
627     $10,391   0.410
628     $13,934   1.151
629     $14,201   1.274
630     $15,784   1.751
631     $18,917   5.074
632     $17,431   4.272
633     $17,044   3.868
634     $14,870   2.009
635     $19,384   2.256
701     $18,250   2.471
705     $14,212   3.019
706     $15,817   2.154
710     $21,911   5.190
711     $19,282   4.579
712     $21,795   3.717
713     $22,904   3.720
713     $22,507   6.127
714     $19,592   4.468
714     $16,900   2.110
718     $12,818   0.782
718     $9,849    0.259
719     $16,931   1.233
719     $23,545   3.288
720     $9,198    0.235
721     $22,190   1.406
721     $19,646   2.206
724     $24,750   5.650
726     $18,140   5.078
728     $21,250   1.433
731     $22,231   7.452
731     $19,788   5.738
735     $13,269   1.364

Data taken from U.S. Bureau of Census 1980 Pierce County, WA; Reprinted in Brown, W.S., Introducing Econometrics, St. Paul (1991): 198-200.

And we got the following regression equation:

Ŷ= 0.000297*Income – 2.221

With an R²=0.597, an F-ratio of 84.31, the t-ratios for INCOME (9.182) and the intercept (-4.094) both solidly significant, and a positive sign on the parameter estimate for INCOME, our model appeared to do very well. However, visual inspection of the regression residuals suggested the presence of heteroscedasticity. Unfortunately, visual inspection can only suggest; we need more objective ways of determining its presence. Hence the three tests below.

The Goldfeld-Quandt Test

The Goldfeld-Quandt test is computationally simple and perhaps the most commonly used method for detecting heteroscedasticity. Since a model with heteroscedastic error terms does not have a constant variance, the Goldfeld-Quandt test postulates that the variances associated with high values of the independent variable, X, are significantly different from those associated with low values. Essentially, you run separate regression analyses for the low values of X and the high values, and then compare the ratio of their error sums of squares to the F-distribution.

The Goldfeld-Quandt test has four steps:

Step #1: Sort the data

Take the independent variable you suspect to be the source of the heteroscedasticity and sort your data set by the X value in low-to-high order:

Housing Data

Tract   Income    Ownratio
616     $4,399    0.022
614     $5,090    0.128
616     $5,411    0.172
615     $8,110    0.059
720     $9,198    0.235
617     $9,541    0.916
718     $9,849    0.259
613     $10,063   0.901
627     $10,391   0.410
619     $11,638   1.019
602     $11,875   1.094
626     $12,244   0.873
612     $12,307   1.584
620     $12,711   1.698
718     $12,818   0.782
621     $12,839   2.188
618     $13,095   1.265
735     $13,269   1.364
628     $13,934   1.151
625     $14,178   2.307
629     $14,201   1.274
705     $14,212   3.019
607     $14,821   1.837
634     $14,870   2.009
610     $15,075   0.919
623     $15,202   2.850
606     $15,351   0.789
611     $15,634   1.898
630     $15,784   1.751
706     $15,817   2.154
624     $15,932   3.049
714     $16,900   2.110
719     $16,931   1.233
633     $17,044   3.868
632     $17,431   4.272
726     $18,140   5.078
701     $18,250   2.471
608     $18,816   5.150
631     $18,917   5.074
609     $19,179   2.201
711     $19,282   4.579
603     $19,308   3.587
635     $19,384   2.256
714     $19,592   4.468
721     $19,646   2.206
731     $19,788   5.738
605     $20,132   3.508
604     $20,375   5.279
728     $21,250   1.433
609     $21,434   1.932
712     $21,795   3.717
710     $21,911   5.190
721     $22,190   1.406
731     $22,231   7.452
713     $22,507   6.127
713     $22,904   3.720
719     $23,545   3.288
724     $24,750   5.650
601     $24,909   7.220

Step #2: Omit the middle observations

Next, take out the observations in the middle. This usually amounts to between one-fifth and one-third of your observations. There’s no hard-and-fast rule about how many observations to omit, and if your data set is small, you may not be able to omit any. In our example, we can omit 13 observations (marked with an asterisk below):

Housing Data

Tract   Income    Ownratio
616     $4,399    0.022
614     $5,090    0.128
616     $5,411    0.172
615     $8,110    0.059
720     $9,198    0.235
617     $9,541    0.916
718     $9,849    0.259
613     $10,063   0.901
627     $10,391   0.410
619     $11,638   1.019
602     $11,875   1.094
626     $12,244   0.873
612     $12,307   1.584
620     $12,711   1.698
718     $12,818   0.782
621     $12,839   2.188
618     $13,095   1.265
735     $13,269   1.364
628     $13,934   1.151
625     $14,178   2.307
629     $14,201   1.274
705     $14,212   3.019
607     $14,821   1.837
634 *   $14,870   2.009
610 *   $15,075   0.919
623 *   $15,202   2.850
606 *   $15,351   0.789
611 *   $15,634   1.898
630 *   $15,784   1.751
706 *   $15,817   2.154
624 *   $15,932   3.049
714 *   $16,900   2.110
719 *   $16,931   1.233
633 *   $17,044   3.868
632 *   $17,431   4.272
726 *   $18,140   5.078
701     $18,250   2.471
608     $18,816   5.150
631     $18,917   5.074
609     $19,179   2.201
711     $19,282   4.579
603     $19,308   3.587
635     $19,384   2.256
714     $19,592   4.468
721     $19,646   2.206
731     $19,788   5.738
605     $20,132   3.508
604     $20,375   5.279
728     $21,250   1.433
609     $21,434   1.932
712     $21,795   3.717
710     $21,911   5.190
721     $22,190   1.406
731     $22,231   7.452
713     $22,507   6.127
713     $22,904   3.720
719     $23,545   3.288
724     $24,750   5.650
601     $24,909   7.220

(* = one of the 13 omitted middle observations)

Step #3: Run two separate regressions, one for the low values, one for the high

We ran separate regressions for the 23 observations with the lowest values for INCOME and the 23 observations with the highest values. In these regressions, we weren’t concerned with whether the t-ratios of the parameter estimates were significant. Rather, we wanted to look at their Error Sum of Squares (ESS). Each model has 21 degrees of freedom.

Step #4: Divide the ESS of the higher-value regression by the ESS of the lower-value regression, and compare the quotient to the critical value in the F-table

The higher-value regression produced an ESS of 61.489 and the lower-value regression produced an ESS of 5.189. Dividing the former by the latter, we get a quotient of 11.851. Now we go to the F-table and check the critical F-value at the 5% significance level for (21, 21) degrees of freedom, which is 2.10. Since our quotient of 11.851 is greater than the critical F-value, we can conclude there is strong evidence of heteroscedasticity in the model.
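To make the four steps concrete, here is a minimal sketch of the Goldfeld-Quandt procedure in Python; it assumes income and ownratio are NumPy arrays holding the 59 observations above:

    import numpy as np

    def ess(x, y):
        # Error sum of squares from a simple OLS of y on x (with intercept)
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    def goldfeld_quandt(x, y, n_omit=13):
        # Step 1: sort by the suspect independent variable
        order = np.argsort(x)
        x, y = x[order], y[order]
        # Step 2: drop the middle n_omit observations
        k = (len(x) - n_omit) // 2  # 23 per group for 59 obs with 13 omitted
        # Steps 3-4: separate regressions on the low and high groups, then the
        # ratio of their ESS, to be compared with the critical F(k-2, k-2) value
        return ess(x[-k:], y[-k:]) / ess(x[:k], y[:k])

    # ratio = goldfeld_quandt(income, ownratio)  # ~11.85 for the housing data
    # evidence of heteroscedasticity if ratio > ~2.10 (5% level, 21 and 21 df)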

The Breusch-Pagan Test

The Breusch-Pagan test is also pretty simple, but it’s a very powerful test in that it can detect whether more than one independent variable is causing the heteroscedasticity. Since it can involve multiple variables, the Breusch-Pagan test relies on critical values of chi-squared (χ²) to determine the presence of heteroscedasticity, and it works best with large samples. There are five steps to the Breusch-Pagan test:

Step #1: Run the regular regression model and collect the residuals

We already did that.

Step #2: Estimate the variance of the regression residuals

To do this, we square each residual, sum the squares, and then divide by the number of observations. Our formula is:

σ̂² = Σeᵢ²/n

Our residuals and their squares are as follows:

Observation   Predicted Ownratio   Residuals   Residuals Squared
1             5.165                2.055       4.222
2             1.300                (0.206)     0.043
3             3.504                0.083       0.007
4             3.821                1.458       2.126
5             3.749                (0.241)     0.058
6             2.331                (1.542)     2.378
7             2.174                (0.337)     0.113
8             3.358                1.792       3.209
9             3.466                (1.265)     1.601
10            4.135                (2.203)     4.852
11            2.249                (1.330)     1.769
12            2.415                (0.517)     0.267
13            1.428                0.156       0.024
14            0.763                0.138       0.019
15            (0.712)              0.840       0.705
16            0.184                (0.125)     0.016
17            (0.917)              0.939       0.881
18            (0.617)              0.789       0.622
19            0.608                0.308       0.095
20            1.662                (0.397)     0.158
21            1.230                (0.211)     0.045
22            1.548                0.150       0.022
23            1.586                0.602       0.362
24            2.287                0.563       0.317
25            2.503                0.546       0.298
26            1.983                0.324       0.105
27            1.410                (0.537)     0.288
28            0.860                (0.450)     0.203
29            1.911                (0.760)     0.577
30            1.990                (0.716)     0.513
31            2.459                (0.708)     0.502
32            3.388                1.686       2.841
33            2.948                1.324       1.754
34            2.833                1.035       1.071
35            2.188                (0.179)     0.032
36            3.527                (1.271)     1.615
37            3.191                (0.720)     0.518
38            1.993                1.026       1.052
39            2.469                (0.315)     0.099
40            4.276                0.914       0.835
41            3.497                1.082       1.171
42            4.242                (0.525)     0.275
43            4.571                (0.851)     0.724
44            4.453                1.674       2.802
45            3.589                0.879       0.773
46            2.790                (0.680)     0.463
47            1.580                (0.798)     0.637
48            0.699                (0.440)     0.194
49            2.800                (1.567)     2.454
50            4.761                (1.473)     2.169
51            0.506                (0.271)     0.074
52            4.359                (2.953)     8.720
53            3.605                (1.399)     1.956
54            5.118                0.532       0.283
55            3.158                1.920       3.686
56            4.080                (2.647)     7.008
57            4.371                3.081       9.492
58            3.647                2.091       4.373
59            1.714                (0.350)     0.122

Summing the last column, we get 83.591. We divide this by 59, and get 1.417.

Step #3: Compute the square of the standardized residuals

Now that we know the variance of the regression residuals – 1.417 – we compute the standardized residuals by dividing each residual by 1.417 and then squaring the results, giving us our squares of the standardized residuals, sᵢ²:

Obs.   Predicted Ownratio   Residuals   Standardized Residuals   Square of Standardized Residuals
1      5.165                2.055       1.450                    2.103
2      1.300                (0.206)     (0.146)                  0.021
3      3.504                0.083       0.058                    0.003
4      3.821                1.458       1.029                    1.059
5      3.749                (0.241)     (0.170)                  0.029
6      2.331                (1.542)     (1.088)                  1.185
7      2.174                (0.337)     (0.238)                  0.057
8      3.358                1.792       1.264                    1.599
9      3.466                (1.265)     (0.893)                  0.797
10     4.135                (2.203)     (1.555)                  2.417
11     2.249                (1.330)     (0.939)                  0.881
12     2.415                (0.517)     (0.365)                  0.133
13     1.428                0.156       0.110                    0.012
14     0.763                0.138       0.097                    0.009
15     (0.712)              0.840       0.593                    0.351
16     0.184                (0.125)     (0.088)                  0.008
17     (0.917)              0.939       0.662                    0.439
18     (0.617)              0.789       0.557                    0.310
19     0.608                0.308       0.217                    0.047
20     1.662                (0.397)     (0.280)                  0.079
21     1.230                (0.211)     (0.149)                  0.022
22     1.548                0.150       0.106                    0.011
23     1.586                0.602       0.425                    0.180
24     2.287                0.563       0.397                    0.158
25     2.503                0.546       0.385                    0.148
26     1.983                0.324       0.229                    0.052
27     1.410                (0.537)     (0.379)                  0.143
28     0.860                (0.450)     (0.318)                  0.101
29     1.911                (0.760)     (0.536)                  0.288
30     1.990                (0.716)     (0.505)                  0.255
31     2.459                (0.708)     (0.500)                  0.250
32     3.388                1.686       1.190                    1.415
33     2.948                1.324       0.935                    0.874
34     2.833                1.035       0.730                    0.534
35     2.188                (0.179)     (0.127)                  0.016
36     3.527                (1.271)     (0.897)                  0.805
37     3.191                (0.720)     (0.508)                  0.258
38     1.993                1.026       0.724                    0.524
39     2.469                (0.315)     (0.222)                  0.049
40     4.276                0.914       0.645                    0.416
41     3.497                1.082       0.764                    0.584
42     4.242                (0.525)     (0.370)                  0.137
43     4.571                (0.851)     (0.600)                  0.361
44     4.453                1.674       1.182                    1.396
45     3.589                0.879       0.621                    0.385
46     2.790                (0.680)     (0.480)                  0.231
47     1.580                (0.798)     (0.563)                  0.317
48     0.699                (0.440)     (0.311)                  0.097
49     2.800                (1.567)     (1.106)                  1.223
50     4.761                (1.473)     (1.040)                  1.081
51     0.506                (0.271)     (0.192)                  0.037
52     4.359                (2.953)     (2.084)                  4.344
53     3.605                (1.399)     (0.987)                  0.974
54     5.118                0.532       0.375                    0.141
55     3.158                1.920       1.355                    1.836
56     4.080                (2.647)     (1.868)                  3.491
57     4.371                3.081       2.175                    4.728
58     3.647                2.091       1.476                    2.179
59     1.714                (0.350)     (0.247)                  0.061

Step #4: Run another regression with all your independent variables, using the square of the standardized residuals as the dependent variable

In this case, we had only one independent variable, INCOME. We will now run a regression substituting the last column of the table above for OWNRATIO, and making it the dependent variable. Again, we’re not interested in the parameter estimates. We are, however, interested in the regression sum of squares (RSS), which is 15.493.

Step #5: Divide the RSS by 2 and compare it with the χ² table’s critical value for the appropriate degrees of freedom

Dividing the RSS by 2, we get 7.747. We look up the critical χ² value for one degree of freedom at a 5% significance level, which is 3.84. Since our χ² value exceeds the critical value, we can conclude there is strong evidence of heteroscedasticity present.
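Here is a minimal sketch of the five steps in Python, following the computation exactly as described above (each residual is divided by the residual variance before squaring); income and ownratio are again assumed to be NumPy arrays of the 59 observations:

    import numpy as np

    def fitted(X, y):
        # Fitted values from OLS of y on the design matrix X
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return X @ beta

    def breusch_pagan(x, y):
        X = np.column_stack([np.ones_like(x), x])
        # Steps 1-2: residuals of the original model and their variance
        e = y - fitted(X, y)
        var_e = (e @ e) / len(e)                 # 1.417 for the housing data
        # Step 3: square of the standardized residuals
        s2 = (e / var_e) ** 2
        # Step 4: regress s2 on the independent variable(s) and take the
        # regression (explained) sum of squares
        s2_hat = fitted(X, s2)
        rss = np.sum((s2_hat - s2.mean()) ** 2)  # 15.493 in the example
        # Step 5: RSS/2 is compared to chi-squared with df = number of X's
        return rss / 2   # > 3.84 (5% level, 1 df) => evidence of heteroscedasticity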

The Park Test

Last, but certainly not least, comes the Park test. I saved this one for last because it is the simplest of the three methods and, unlike the other two, provides information that can help eliminate the heteroscedasticity. The Park test assumes there is a relationship between the error variance and one of the regression model’s independent variables. The steps involved are as follows:

Step #1: Run your original regression model and collect the residuals

Done.

Step #2: Square the regression residuals and compute the logs of the squared residuals and the values of the suspected independent variable.

We’ll square the regression residuals, and take their natural log. We will also take the natural log of INCOME:

Tract   Residual Squared   LnResidual Squared   LnIncome
601     4.222              1.440                10.123
602     0.043              (3.157)              9.382
603     0.007              (4.987)              9.868
604     2.126              0.754                9.922
605     0.058              (2.848)              9.910
606     2.378              0.866                9.639
607     0.113              (2.176)              9.604
608     3.209              1.166                9.842
609     1.601              0.470                9.862
609     4.852              1.579                9.973
610     1.769              0.571                9.621
611     0.267              (1.320)              9.657
612     0.024              (3.720)              9.418
613     0.019              (3.960)              9.217
614     0.705              (0.349)              8.535
615     0.016              (4.162)              9.001
616     0.881              (0.127)              8.389
616     0.622              (0.475)              8.596
617     0.095              (2.356)              9.163
618     0.158              (1.847)              9.480
619     0.045              (3.112)              9.362
620     0.022              (3.796)              9.450
621     0.362              (1.015)              9.460
623     0.317              (1.148)              9.629
624     0.298              (1.211)              9.676
625     0.105              (2.255)              9.559
626     0.288              (1.245)              9.413
627     0.203              (1.596)              9.249
628     0.577              (0.549)              9.542
629     0.513              (0.668)              9.561
630     0.502              (0.689)              9.667
631     2.841              1.044                9.848
632     1.754              0.562                9.766
633     1.071              0.069                9.744
634     0.032              (3.437)              9.607
635     1.615              0.479                9.872
701     0.518              (0.658)              9.812
705     1.052              0.051                9.562
706     0.099              (2.309)              9.669
710     0.835              (0.180)              9.995
711     1.171              0.158                9.867
712     0.275              (1.289)              9.989
713     0.724              (0.323)              10.039
713     2.802              1.030                10.022
714     0.773              (0.257)              9.883
714     0.463              (0.770)              9.735
718     0.637              (0.452)              9.459
718     0.194              (1.640)              9.195
719     2.454              0.898                9.737
719     2.169              0.774                10.067
720     0.074              (2.608)              9.127
721     8.720              2.166                10.007
721     1.956              0.671                9.886
724     0.283              (1.263)              10.117
726     3.686              1.305                9.806
728     7.008              1.947                9.964
731     9.492              2.250                10.009
731     4.373              1.476                9.893
735     0.122              (2.102)              9.493

Step #3: Run the regression using the log of the squared residuals as the dependent variable and the log of the suspected independent variable as the independent variable

That results in the following regression equation:

Ln(e²) = 1.957(LnIncome) – 19.592

Step #4: If the t-ratio for the transformed independent variable is significant, you can conclude heteroscedasticity is present.

The parameter estimate for LnIncome is significant, with a t-ratio of 3.499, so we conclude that heteroscedasticity is present.
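And a minimal sketch of the Park test in Python, under the same assumption that income and ownratio are NumPy arrays of the 59 observations:

    import numpy as np

    def park_test(x, y):
        # Step 1: residuals from the original regression
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        # Step 2: logs of the squared residuals and of the suspect variable
        ln_e2, ln_x = np.log(e ** 2), np.log(x)
        # Step 3: regress ln(e^2) on ln(x)
        Z = np.column_stack([np.ones_like(ln_x), ln_x])
        g, *_ = np.linalg.lstsq(Z, ln_e2, rcond=None)
        # Step 4: a significant t-ratio on ln(x) signals heteroscedasticity;
        # the slope (~1.957 here) is the exponent used to build the WLS weights
        return g[1]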

Next Forecast Friday Topic: Correcting Heteroscedasticity

Thanks for your patience! Now you know the three most common methods for detecting heteroscedasticity: the Goldfeld-Quandt test, the Breusch-Pagan test, and the Park test. As you will see in next week’s Forecast Friday post, the Park test will be beneficial in helping us eliminate the heteroscedasticity. We will discuss the most common approach to correcting heteroscedasticity: weighted least squares (WLS) regression, and show you how to apply it. Next week’s Forecast Friday post will conclude our discussion of regression violations, and allow us to resume discussions of more practical applications in forecasting.
