
Forecast Friday Topic: Correcting Heteroscedasticity

August 26, 2010

(Nineteenth in a series)

In last week’s Forecast Friday post, we discussed the three most commonly used analytical approaches to detecting heteroscedasticity: the Goldfeld-Quandt test, the Breusch-Pagan test, and the Park test. We continued to work with our data set of 59 census tracts in Pierce County, WA, from which we were trying to determine what, if any, influence the tract’s median family income had on the ratio of the number of families in the tract who own their home to the number of families who rent. As we saw, heteroscedasticity was present in our model, caused largely by the wide variation in income from one census tract to the other.

Recall that while INCOME, for the most part, had a positive relationship with OWNRATIO, we found many census tracts that, despite having high median family incomes, had low OWNRATIOs. This is because, unlike low-income families whose housing options are limited, high-income families have many more housing options. The fact that the wealthier census tracts have more options increases the variability within the relationship between INCOME and OWNRATIO, causing us to generate errors that don’t have a constant variance and to produce forecasts with parameter estimates that don’t seem to make sense.

Today, we turn our attention to correcting heteroscedasticity, and we will do that by transforming our model using Weighted Least Squares (WLS) regression. And we’ll show how our results from the Park test can enable us to approximate the weights to use in our WLS model.

Weighted Least Squares Regression

Wide variances in the value of one or more independent variables cause heteroscedastic errors because the regression model places heavier weight on extreme values. By weighting each observation in the data set, we eliminate that tendency. But how do we know what weights to use? That depends on whether the variance of each individual observation is known or unknown.

If the variances are known, then you would simply divide each observation by its standard deviation and then run your regression to get a transformed model. Rarely, however, is the individual variance known, so we need to apply a more intricate approach.
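To make the mechanics concrete, here is a minimal sketch (on synthetic data, not the Pierce County figures) of both routes when the observation variances are known: dividing through by each observation’s standard deviation or, equivalently, handing statsmodels weights of 1/σ². All variable names and numbers are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(5_000, 25_000, size=59)            # stand-in for median income
sigma = 0.0001 * x                                  # assume each observation's sigma is known
y = -2.2 + 0.0003 * x + rng.normal(0.0, sigma)      # errors spread out as x grows

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                            # unbiased but inefficient under heteroscedasticity
wls = sm.WLS(y, X, weights=1.0 / sigma**2).fit()    # equivalent to dividing every term by sigma

print(ols.params, ols.bse)                          # coefficients and standard errors
print(wls.params, wls.bse)                          # similar coefficients, smaller standard errors
```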

Returning to our housing model, our regression equation was:

Ŷ= 0.000297*Income – 2.221

With an R2=0.597, an F-ratio of 84.31, and t-ratios of 9.182 for INCOME and -4.094 for the intercept.

We know that INCOME, our independent variable, is the source of the heteroscedasticity. Let’s also assume that the “correct” housing model is also of a linear functional form like our model above. In this case, we would divide each observation’s dependent variable (OWNRATIO) value by the value of its independent variable, INCOME, forming a new dependent variable (OwnRatio_Income) and then take the reciprocal of the INCOME value, and form a new independent variable, IncomeReciprocal.
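As a quick illustration of that transformation in code, here is a sketch using the first few census tracts from the housing table further down this page; the pandas column names are mine, not part of the original data set.

```python
import pandas as pd

# First three tracts from the housing data (601, 602, 603), for illustration only
df = pd.DataFrame({
    "Income":   [24909, 11875, 19308],
    "OwnRatio": [7.220, 1.094, 3.587],
})

df["OwnRatio_Income"]  = df["OwnRatio"] / df["Income"]   # new dependent variable
df["IncomeReciprocal"] = 1.0 / df["Income"]              # new independent variable
print(df.round(6))
```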

Recalling the Park Test

How do we know to choose the reciprocal? Remember when we did the Park test last week? We got the following equation:

Ln(e2) = 1.957(LnIncome) – 19.592

The parameter estimate for LnIncome is 1.957. The Park test assumes that the variance of the heteroscedastic error is equal to the variance of the homoscedastic error times Xi raised to an exponent. That coefficient represents the exponent to which our independent variable Xi is raised. Since the Park test is performed by regressing a double log function, we divide that coefficient by two to arrive at the exponent of the Xi value by which to weight our observations:

Essentially, we are saying that:

Var(heteroscedastic errors in housing model) = Var(homoscedastic errors in housing model) × Xi^1.957

For simplicity’s sake, let’s round the coefficient from 1.957 to 2. Hence, we divide our dependent variable by Xi^(2/2) = Xi, and replace our independent variable with its reciprocal, 1/Xi.
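For reference, here is a sketch of how that Park-test coefficient and the resulting weight could be computed in code. It assumes the tract data have already been loaded into NumPy arrays named income and own_ratio (the loading step is not shown), and the variable names are mine.

```python
import numpy as np
import statsmodels.api as sm

# Assumes `income` and `own_ratio` are NumPy arrays holding the 59 tracts' data.
housing = sm.OLS(own_ratio, sm.add_constant(income)).fit()      # original housing model

# Park test: regress the log of the squared residuals on the log of Income
park = sm.OLS(np.log(housing.resid ** 2), sm.add_constant(np.log(income))).fit()

delta = park.params[1]            # the post reports approximately 1.957
weight_power = round(delta) / 2   # = 1, so each observation is divided by income itself
```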

Estimating the Housing Model Using WLS

We weight the values for each census tract’s housing data accordingly:

OwnRatio_Income     IncomeReciprocal
0.000290            0.000040
0.000092            0.000084
0.000186            0.000052
0.000259            0.000049
0.000174            0.000050
0.000051            0.000065
0.000124            0.000067
0.000274            0.000053
0.000115            0.000052
0.000090            0.000047
0.000061            0.000066
0.000121            0.000064
0.000129            0.000081
0.000090            0.000099
0.000025            0.000196
0.000007            0.000123
0.000005            0.000227
0.000032            0.000185
0.000096            0.000105
0.000097            0.000076
0.000088            0.000086
0.000134            0.000079
0.000170            0.000078
0.000187            0.000066
0.000191            0.000063
0.000163            0.000071
0.000071            0.000082
0.000039            0.000096
0.000083            0.000072
0.000090            0.000070
0.000111            0.000063
0.000268            0.000053
0.000245            0.000057
0.000227            0.000059
0.000135            0.000067
0.000116            0.000052
0.000135            0.000055
0.000212            0.000070
0.000136            0.000063
0.000237            0.000046
0.000237            0.000052
0.000171            0.000046
0.000162            0.000044
0.000272            0.000044
0.000228            0.000051
0.000125            0.000059
0.000061            0.000078
0.000026            0.000102
0.000073            0.000059
0.000140            0.000042
0.000026            0.000109
0.000063            0.000045
0.000112            0.000051
0.000228            0.000040
0.000280            0.000055
0.000067            0.000047
0.000335            0.000045
0.000290            0.000051
0.000103            0.000075

And we run a regression to get a model of this form:

 

OwnRatio_Incomei = α* + β1*IncomeReciprocali + εi*

Notice the asterisks for each of the parameter estimates. They denote the transformed model. Performing our transformed regression, we get:



We get an R2 of .596 for the transformed model, not much different from that of our original model. However, notice the intercept of our transformed model and look at the coefficient of INCOME from our original model: they are almost equal. That’s because when you divide each observation by Xi, you essentially divide 0.000297*INCOME by INCOME, turning the slope into the intercept! Since heteroscedasticity doesn’t bias parameter estimates, we would expect the slope of our original model and the intercept of our transformed model to be equivalent. This is because those parameter estimates are averages; heteroscedasticity doesn’t bias the average, but the variance.

Note that the t-ratio for the intercept in our transformed model is much stronger than that of the coefficient for INCOME in our original model (12.19 vs. 9.182), suggesting that the transformed model has generated a more efficient estimate of the slope parameter. That’s because the standard error of the estimate (read: variance) is smaller in our transformed model. We divide a parameter estimate by its standard error to get its t-ratio; because the standard error is smaller, our estimates are more trustworthy.
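Here is a sketch of that comparison, again on synthetic data shaped like the housing example (so the numbers will differ from the post’s, but the pattern should hold): the transformed model’s intercept lands near the original slope and carries a larger t-ratio.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
income = rng.uniform(5_000, 25_000, size=59)
own_ratio = -2.2 + 0.0003 * income + rng.normal(0.0, 0.00008 * income)

original = sm.OLS(own_ratio, sm.add_constant(income)).fit()

y_star = own_ratio / income                      # OwnRatio_Income
X_star = sm.add_constant(1.0 / income)           # constant plus IncomeReciprocal
transformed = sm.OLS(y_star, X_star).fit()

print(original.params[1], original.tvalues[1])         # original slope and its t-ratio
print(transformed.params[0], transformed.tvalues[0])   # roughly the same estimate, typically a larger t-ratio
```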

Recap

This concludes our discussions of all the violations that can occur with regression analysis and the problems these violations can cause. You now understand that omitting important independent variables, multicollinearity, autocorrelation, and heteroscedasticity can all cause you to generate models that produce unacceptable forecasts and predictions. You now know how to diagnose these violations and how to correct them. One thing you’ve probably also noticed as we went through these discussions is that data is never perfect. No matter how good our data is, we must still work with it and adapt it in a way that lets us derive actionable insights from it.

Forecast Friday Will Resume Two Weeks From Today

Next week is the weekend before Labor Day, and I am forecasting that many of you will be leaving the office early for the long weekend, so I have decided to make the next edition of Forecast Friday for September 9. The other two posts that appear earlier in the week will continue as scheduled. Beginning with the September 9 Forecast Friday post, we will talk about additional regression analysis topics that are much less theoretical than these last few posts’ topics, and much more practical. Until then, Analysights wishes you and your family a great Labor Day weekend!

****************************************************

Help us Reach 200 Fans on Facebook by Tomorrow!
Thanks to all of you, Analysights now has over 160 fans on Facebook! Can you help us get up to 200 fans by tomorrow? If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter. Thanks for your help!


Forecast Friday Topic: Heteroscedasticity

August 12, 2010

(Seventeenth in a series)

Recall that one of the important assumptions in regression analysis is that a regression equation exhibit homoscedasticity: the condition that the error terms have a constant variance. Today we discuss heteroscedasticity, the violation of that assumption.

Heteroscedasticity, like autocorrelation and multicollinearity, results in inefficient parameter estimates. The standard errors of the parameter estimates tend to be biased, which means that the t-ratios and confidence intervals calculated around the suspect independent variable will not be valid, and will generate dubious predictions.

Heteroscedasticity occurs mostly in cross-sectional, as opposed to time series, data and mostly in large data sets. When data sets are large, the range of values for an independent variable can be quite wide. This is especially the case in data where income or other measures of wealth are used as independent variables. Persons with low income have few options about how to spend their money, while persons with high incomes have many options. If you were trying to predict whether the conviction rate for crimes differs between low-income and high-income counties, your model might exhibit heteroscedasticity, because a low-income defendant may not have the funds for an adequate defense and may be restricted to a public defender or other inexpensive attorney. A wealthy individual, on the other hand, can hire the very best defense lawyer money can buy; or he could choose an inexpensive lawyer, or even the public defender. The wealthy individual may even be able to make restitution in lieu of a conviction.

How does this disparity affect your model? Recall from our earlier discussions on regression analysis that the least-squares method places more weight on extreme values. When outliers exist in data, they generate large residuals that get scattered out from those of the remaining observations. While heteroscedastic error terms will still have a mean of zero, their variance is greatly out of whack, resulting in inefficient parameter estimates.

In today’s Forecast Friday post, we will look at a data set for a regional housing market, perform a regression, and show how to detect heteroscedasticity visually.

Heteroscedasticity in the Housing Market

The best depiction of heteroscedasticity comes from my college econometrics textbook, Introducing Econometrics, by William S. Brown. In the chapter on heteroscedasticity, Brown provides a data set of housing statistics from the 1980 Census for Pierce County, Washington, which I am going to use for our model. The housing market is certainly one market where heteroscedasticity is deeply entrenched, since there is a dramatic range for both incomes and home market values. In our data set, we have 59 census tracts within Pierce County. Our independent variable is the median family income for the census tract; our dependent variable is the OwnRatio – the ratio of the number of families who own their homes to the number of families who rent. Our data set is as follows:

Housing Data

Tract    Income     Ownratio
601      $24,909    7.220
602      $11,875    1.094
603      $19,308    3.587
604      $20,375    5.279
605      $20,132    3.508
606      $15,351    0.789
607      $14,821    1.837
608      $18,816    5.150
609      $19,179    2.201
609      $21,434    1.932
610      $15,075    0.919
611      $15,634    1.898
612      $12,307    1.584
613      $10,063    0.901
614      $5,090     0.128
615      $8,110     0.059
616      $4,399     0.022
616      $5,411     0.172
617      $9,541     0.916
618      $13,095    1.265
619      $11,638    1.019
620      $12,711    1.698
621      $12,839    2.188
623      $15,202    2.850
624      $15,932    3.049
625      $14,178    2.307
626      $12,244    0.873
627      $10,391    0.410
628      $13,934    1.151
629      $14,201    1.274
630      $15,784    1.751
631      $18,917    5.074
632      $17,431    4.272
633      $17,044    3.868
634      $14,870    2.009
635      $19,384    2.256
701      $18,250    2.471
705      $14,212    3.019
706      $15,817    2.154
710      $21,911    5.190
711      $19,282    4.579
712      $21,795    3.717
713      $22,904    3.720
713      $22,507    6.127
714      $19,592    4.468
714      $16,900    2.110
718      $12,818    0.782
718      $9,849     0.259
719      $16,931    1.233
719      $23,545    3.288
720      $9,198     0.235
721      $22,190    1.406
721      $19,646    2.206
724      $24,750    5.650
726      $18,140    5.078
728      $21,250    1.433
731      $22,231    7.452
731      $19,788    5.738
735      $13,269    1.364

Data taken from U.S. Bureau of Census 1980 Pierce County, WA; Reprinted in Brown, W.S., Introducing Econometrics, St. Paul (1991): 198-200.

When we run our regression, we get the following equation:

Ŷ= 0.000297*Income – 2.221

Both the intercept and independent variable’s parameter estimates are significant, with the intercept parameter having a t-ratio of -4.094 and the income estimate having one of 9.182. R2 is 0.597, and the F-statistic is a strong 84.31. The model seems to be pretty good – strong t-ratios and F-statistic, a high coefficient of determination, and the sign on the parameter estimate for Income is positive, as we would expect. Generally, the higher the income, the greater the Own-to-rent ratio. So far so good.

The problem comes when we do a visual inspection of our data: first plotting the independent variable against the dependent variable, then plotting it against the regression residuals. First, let’s take a look at the scatter plot of Income and OwnRatio:

Without even looking at the residuals, we can see that as median family income increases, the data points begin to spread out. Look at what happens to the distance between data points above and below the line when median family incomes reach $20,000: OwnRatios vary drastically.

Now let’s plot Income against the regression’s residuals:

This scatter plot shows essentially the same phenomenon as the previous graph, but from a different perspective. We can clearly see the error terms fanning out as Income increases. In fact, we can see the residuals diverging at increasing rates once Income starts moving from $10,000 to $15,000, and just compounding as incomes go higher. Roughly half the residuals fall on the positive side and half on the negative side, so we still meet the regression assumption that our residuals have a mean of zero; hence our parameter estimates are not biased. However, because we violated the constant variance assumption, the standard error of our regression is biased, so our parameter estimates are suspect.
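For readers following along in code rather than Excel, here is a sketch of the fit and the two plots described above. It assumes the tract data have been loaded into a pandas DataFrame named df with columns “Income” and “Ownratio” (the loading step is not shown).

```python
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Assumes df is a pandas DataFrame with the 59 tracts' "Income" and "Ownratio" columns.
fit = sm.OLS(df["Ownratio"], sm.add_constant(df["Income"])).fit()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.scatter(df["Income"], df["Ownratio"])       # raw relationship
ax1.set_xlabel("Income")
ax1.set_ylabel("Ownratio")

ax2.scatter(df["Income"], fit.resid)            # residuals should fan out as Income grows
ax2.axhline(0, linewidth=1)
ax2.set_xlabel("Income")
ax2.set_ylabel("Residual")

plt.tight_layout()
plt.show()
```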

Visual Inspection Only Gets You So Far

By visually inspecting our residuals, we can clearly see that our error terms are not homoscedastic. When you have a regression model, especially for cross-sectional data sets like this, you should visually inspect every independent variable against the dependent variable and against the error terms in order to get an a priori indication of heteroscedasticity. However, visual inspection alone is not a guarantee that heteroscedasticity exists. There are three particularly simple methods for detecting heteroscedasticity, which we will discuss in next week’s Forecast Friday post: the Park Test, the Goldfeld-Quandt Test, and the Breusch-Pagan Test.
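For reference, statsmodels ships ready-made versions of two of those tests; the sketch below continues from the previous code block (the same df and fit are assumed) and is only an illustration of the diagnostics, not the worked example from the post.

```python
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_goldfeldquandt

exog = sm.add_constant(df["Income"])
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, exog)

# Goldfeld-Quandt splits the sample, so sort by the suspect variable first
df_sorted = df.sort_values("Income")
gq_f, gq_pvalue, _ = het_goldfeldquandt(df_sorted["Ownratio"],
                                        sm.add_constant(df_sorted["Income"]))

print(f"Breusch-Pagan p-value:   {lm_pvalue:.4f}")   # small p-values point to heteroscedasticity
print(f"Goldfeld-Quandt p-value: {gq_pvalue:.4f}")
```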

*************************

Help us Reach 200 Fans on Facebook by Tomorrow!

Thanks to all of you, Analysights now has 150 fans on Facebook! Can you help us get up to 200 fans by tomorrow? If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! And if you like us that much, please also pass these posts on to your friends who like forecasting and invite them to “Like” Analysights!  By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when new information comes out. Check out our Facebook page! You can also follow us on Twitter.   Thanks for your help!

Forecast Friday Topic: Multicollinearity – Correcting and Accepting it

July 22, 2010

(Fourteenth in a series)

In last week’s Forecast Friday post, we discussed how to detect multicollinearity in a regression model and how dropping a suspect variable or variables from the model can be one approach to reducing or eliminating multicollinearity. However, removing variables can cause other problems – particularly specification bias – if the suspect variable is indeed an important predictor. Today we will discuss two additional approaches to correcting multicollinearity – obtaining more data and transforming variables – and will discuss when it’s best to just accept the multicollinearity.

Obtaining More Data

Multicollinearity is really an issue with the sample, not the population. Sometimes, sampling produces a data set that might be too homogeneous. One way to remedy this would be to add more observations to the data set. Enlarging the sample will introduce more variation in the data series, which reduces the effect of sampling error and helps increase precision when estimating various properties of the data. Increased sample sizes can reduce either the presence or the impact of multicollinearity, or both. Obtaining more data is often the best way to remedy multicollinearity.

Obtaining more data does have problems, however. Sometimes, additional data just isn’t available. This is especially the case with time series data, which can be limited or otherwise finite. If you need to obtain that additional information through great effort, it can be costly and time consuming. Also, the additional data you add to your sample could be quite similar to your original data set, so there would be no benefit to enlarging your data set. The new data could even make problems worse!

Transforming Variables

Another way statisticians and modelers go about eliminating multicollinearity is through data transformation. This can be done in a number of ways.

Combine Some Variables

The most obvious way would be to find a way to combine some of the variables. After all, multicollinearity suggests that two or more independent variables are strongly correlated. Perhaps you can multiply two variables together and use the product of those two variables in place of them.

So, in our example of the donor history, we had the two variables “Average Contribution in Last 12 Months” and “Times Donated in Last 12 Months.” We can multiply them to create a composite variable, “Total Contributions in Last 12 Months,” and then use that new variable, along with the variable “Months Since Last Donation” to perform the regression. In fact, if we did that with our model, we end up with a model (not shown here) that has an R2=0.895, and this time the coefficient for “Months Since Last Donation” is significant, as is our “Total Contribution” variable. Our F statistic is a little over 72. Essentially, the R2 and F statistics are only slightly lower than in our original model, suggesting that the transformation was useful. However, looking at the correlation matrix, we still see a strong negative correlation between our two independent variables, suggesting that we still haven’t eliminated multicollinearity.
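Here is a sketch of that combination in code, using the 20-donor data from the “Building Regression Models With Excel” post further down this page; the DataFrame column names are mine, not the post’s.

```python
import pandas as pd
import statsmodels.api as sm

donors = pd.DataFrame({
    "Contribution":    [95, 110, 100, 115, 100, 120, 105, 125, 105, 130,
                        135, 150, 140, 155, 140, 160, 145, 165, 150, 170],
    "MonthsSinceLast": [10, 8, 10, 8, 9, 6, 9, 10, 9, 4, 7, 2, 4, 2, 2, 2, 3, 1, 3, 1],
    "TimesDonated":    [1, 2, 2, 3, 1, 2, 1, 1, 2, 3, 4, 8, 3, 9, 8, 10, 6, 12, 4, 12],
    "AvgContribution": [85, 95, 90, 75, 95, 100, 90, 125, 100, 150,
                        125, 150, 125, 140, 130, 150, 135, 150, 160, 140],
})

# Combine frequency and monetary value into one composite variable
donors["TotalContribution"] = donors["TimesDonated"] * donors["AvgContribution"]

X = sm.add_constant(donors[["MonthsSinceLast", "TotalContribution"]])
combined = sm.OLS(donors["Contribution"], X).fit()
print(combined.rsquared, combined.fvalue)   # the post reports roughly 0.895 and an F just over 72
print(combined.tvalues)                     # both slopes come up significant
```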

Centered Interaction Terms

Sometimes we can reduce multicollinearity by creating an interaction term between the variables in question. In a model trying to predict performance on a test based on hours spent studying and hours of sleep, you might find that hours spent studying appears to be related to hours of sleep. So, you create a third independent variable, Sleep_Study_Interaction. You do this by computing the average value of both the hours-of-sleep and hours-of-studying variables. For each observation, you subtract each independent variable’s mean from its respective value for that observation. Once you’ve done that for each observation, multiply their differences together. This is your interaction term, Sleep_Study_Interaction. Run the regression now with the original two variables and the interaction term. When you subtract the means from the variables in question, you are in effect centering the interaction term, which means you’re taking the central tendency of your data into account.
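Here is a minimal sketch of that centering procedure on synthetic data for the post’s hypothetical sleep/study example; all names and numbers are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
study = rng.normal(6, 2, size=100)
sleep = 8 - 0.3 * study + rng.normal(0, 1, size=100)            # correlated with study time
score = 50 + 4 * study + 2 * sleep + rng.normal(0, 5, size=100)

d = pd.DataFrame({"score": score, "study": study, "sleep": sleep})

# Center each variable at its mean, then multiply the centered values together
d["Sleep_Study_Interaction"] = (d["sleep"] - d["sleep"].mean()) * (d["study"] - d["study"].mean())

X = sm.add_constant(d[["study", "sleep", "Sleep_Study_Interaction"]])
print(sm.OLS(d["score"], X).fit().summary())
```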

Differencing Data

If you’re working with time series data, one way to reduce multicollinearity is to run your regression using differences. To do this, you take every variable – dependent and independent – and, beginning with the second observation, subtract the immediately prior observation’s values for those variables from the current observation’s. Now, instead of working with the original data, you are working with the change in data from one period to the next. Differencing eliminates multicollinearity by removing the trend component of the time series. If all independent variables had followed more or less the same trend, they could end up highly correlated. Sometimes, however, trends can build on themselves for several periods, so multiple differencing may be required. In this case, subtracting the period before was taking a “first difference.” If we subtracted two periods before, it’s a “second difference,” and so on. Note also that with differencing, we lose the first observations in the data, depending on how many periods we have to difference, so if you have a small data set, differencing can reduce your degrees of freedom and increase your risk of making a Type II Error: concluding that an independent variable is not statistically significant when, in truth, it is.
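Here is a sketch of first-differencing on an illustrative trending time series (synthetic data; the column names are mine):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
t = np.arange(60)
x1 = 10 + 0.50 * t + rng.normal(0, 1, 60)     # two predictors that share a trend ...
x2 = 5 + 0.45 * t + rng.normal(0, 1, 60)      # ... and are therefore highly correlated in levels
y = 3 + 2 * x1 + 1.5 * x2 + rng.normal(0, 2, 60)

df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
diffed = df.diff().dropna()                   # first differences; the first observation is lost

print(df[["x1", "x2"]].corr().iloc[0, 1])     # near 1.0 in levels
print(diffed[["x1", "x2"]].corr().iloc[0, 1]) # much lower after differencing

fit = sm.OLS(diffed["y"], sm.add_constant(diffed[["x1", "x2"]])).fit()
print(fit.params)
```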

Other Transformations

Sometimes, it makes sense to take a look at a scatter plot of each independent variable’s values with that of the dependent variable to see if the relationship is fairly linear. If it is not, that’s a cue to transform an independent variable. If an independent variable appears to have a logarithmic relationship, you might substitute its natural log. Also, depending on the relationship, you can use other transformations: square root, square, negative reciprocal, etc.

Another consideration: if you’re predicting the impact of violent crime on a city’s median family income, instead of using the number of violent crimes committed in the city, you might instead divide it by the city’s population and come up with a per-capita figure. That will give more useful insights into the incidence of crime in the city.

Transforming data in these ways helps reduce multicollinearity by representing independent variables differently, so that they are less correlated with other independent variables.
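A couple of those transformations look like this in code; the DataFrame, its column names, and the figures are illustrative placeholders, not data from the post.

```python
import numpy as np
import pandas as pd

# Illustrative city-level data; the column names are mine
city_df = pd.DataFrame({
    "median_income":  [41000, 52000, 38500, 61000],
    "violent_crimes": [1200, 800, 1500, 400],
    "population":     [150000, 220000, 90000, 310000],
})

city_df["crimes_per_capita"] = city_df["violent_crimes"] / city_df["population"]  # per-capita figure
city_df["log_income"] = np.log(city_df["median_income"])                          # log transformation
print(city_df)
```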

Limits of Data Transformation

Transforming data has its own pitfalls. First, transforming data also transforms the model. A model that uses a per-capita crime figure for an independent variable has a very different interpretation than one using an aggregate crime figure. Also, interpretations of models and their results get more complicated as data is transformed. Ideally, models are supposed to be parsimonious – that is, they explain a great deal about the relationship as simply as possible. Typically, parsimony means as few independent variables as possible, but it also means as few transformations as possible. You also need to do more work. If you try to plug in new data to your resulting model for forecasting, you must remember to take the values for your data and transform them accordingly.

Living With Multicollinearity

Multicollinearity is par for the course when a model consists of two or more independent variables, so often the question isn’t whether multicollinearity exists, but rather how severe it is. Multicollinearity doesn’t bias your parameter estimates, but it inflates their variance, making them inefficient or untrustworthy. As you have seen from the remedies offered in this post, the cures can be worse than the disease. Correcting multicollinearity can also be an iterative process; the benefit of reducing multicollinearity may not justify the time and resources required to do so. Sometimes, any effort to reduce multicollinearity is futile. Generally, for the purposes of forecasting, it might be perfectly OK to disregard the multicollinearity. If, however, you’re using regression analysis to explain relationships, then you must try to reduce the multicollinearity.

A good approach is to run a couple of different models, some using variations of the remedies we’ve discussed here, and comparing their degree of multicollinearity with that of the original model. It is also important to compare the forecast accuracy of each. After all, if all you’re trying to do is forecast, then a model with slightly less multicollinearity but a higher degree of forecast error is probably not preferable to a more precise forecasting model with higher degrees of multicollinearity.

The Takeaways:

  1. Where you have multiple regression, you almost always have multicollinearity, especially in time series data.
  2. A correlation matrix is a good way to detect multicollinearity. Multicollinearity can be very serious if the correlation matrix shows that some of the independent variables are more highly correlated with each other than they are with the dependent variable.
  3. You should suspect multicollinearity if:
    1. You have a high R2 but low t-statistics;
    2. The sign for a coefficient is opposite of what is normally expected (a relationship that should be positive is negative, and vice-versa).
  4. Multicollinearity doesn’t bias parameter estimates, but makes them untrustworthy by enlarging their variance.
  5. There are several ways of remedying multicollinearity, with obtaining more data often being the best approach. Each remedy for multicollinearity introduces a new set of problems and limitations, so you must weigh the benefit of reduced multicollinearity against the time and resources needed to achieve it, and the resulting impact on your forecast accuracy.

Next Forecast Friday Topic: Autocorrelation

These past two weeks, we discussed the problem of multicollinearity. Next week, we will discuss the problem of autocorrelation – the phenomenon that occurs when we violate the assumption that the error terms are not correlated with each other. We will discuss how to detect autocorrelation, discuss in greater depth the Durbin-Watson statistic’s use as a measure of the presence of autocorrelation, and how to correct for autocorrelation.

*************************

If you Like Our Posts, Then “Like” Us on Facebook and Twitter!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page! You can also follow us on Twitter.

Forecast Friday Topic: Multicollinearity – How to Detect it; How to Correct it

July 15, 2010

(Thirteenth in a series)

In last week’s Forecast Friday post, we explored how to perform regression analysis using Excel. We looked at the giving history of 20 contributors to a nonprofit organization, and developed a model based on the recency, frequency, and monetary value (RFM) of their past donations. We derived the following regression equation:

Ŷ = 87.27 – 1.80X1 + 2.45X2 + 0.35X3

We were pleased to see that our model had a coefficient of determination – or R2=0.933, indicating that our model explained 93.3% of the change in the donor’s current contribution (our Ŷ). But we were a little disheartened when we looked at the t-statistics of each of our regression coefficients. Recall that we found our recency coefficient was not significant:

Parameter              Coefficient    T-statistic    Significant?
Intercept              87.27          4.32           Yes
Months since Last      (1.80)         (1.44)         No
Times Donated          2.45           2.87           Yes
Average Contribution   0.35           3.26           Yes

Yet, most direct marketing professionals know clearly that RFM theory postulates that all three variables are significant indicators of whether and how much a donor will give (or a customer will buy). When our model doesn’t replicate what a tried and true theory has long maintained, there could possibly be something wrong.

Multicollinearity

Most times, when something doesn’t look right in the results of a regression model, it is safe to assume that one of the regression assumptions has been violated. The problem is trying to determine which assumption – or assumptions – was violated. Since the coefficient for “Months Since Last Contribution” has a t-statistic that indicates it isn’t statistically significant, we might suspect that the specification assumption is violated: that is, we may believe that “Months Since Last Contribution” is an extraneous, irrelevant variable that should not have been included in the model and, thus, be removed.

But is that really the case? There can be other reasons why a parameter estimate does not come up significant. If two or more independent variables are highly correlated, the resulting multicollinearity can cause the regression model to assign a statistically insignificant parameter estimate to an important independent variable. So, how can we detect multicollinearity?

Detecting Multicollinearity: Correlation Matrix

The first step in detecting multicollinearity is to examine the correlation among the independent variables. We do this by looking at a correlation matrix. You can run a correlation matrix in Excel by using its Data Analysis ToolPak. Looking at the correlation matrix for our variables, we find:

Correlation Matrix – Original Variables

Variable                                   Y        X1       X2       X3
Contribution (Y)                           1.00
Months Since Last Donation – X1           -0.93     1.00
Times Donated in last 12 months – X2       0.89    -0.88     1.00
Average Contribution Last 12 mo. – X3      0.88    -0.84     0.69     1.00

A correlation of 1.00 means two variables are perfectly correlated; a correlation of 0.00 means there is absolutely no correlation. The cells in the matrix above where the correlation is 1.00 show the correlation of a variable with itself – we would expect a perfectly correlated relationship. What is most important to us are the numbers below the 1.00 correlations. The first column shows our dependent variable, “Contribution”. As you go down the column, row by row, you see that each of our independent variables is strongly correlated with the dependent variable, indicating that they are all strong predictors.

The correlation between “Months Since Last Donation” (X1) and the donor’s Contribution (Y) is almost perfectly negative (-0.93), while the correlations of the dependent variable with the other two independent variables are strongly positive (0.89 and 0.88). When writing these in shorthand, we use the Greek letter rho, ρ, to denote correlation. Hence, to show the correlation between each independent variable and the dependent variable, we would express them as follows:

ρX1Y = -0.93

ρX2Y = 0.89

ρX3Y = 0.88

But now, let’s look at the correlations among our independent variables:

ρX1X2= -0.88

ρX1X3= -0.84

ρX2X3= 0.69

 

Notice that all of our independent variables are highly correlated with one another. The correlation between “Times Donated in Last 12 Months” and “Average Contribution in Last 12 Months” is not as strong as the correlations of those variables with “Months Since Last Donation,” but it is still very strong.

Hence, we can conclude that multicollinearity is present in this model.
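Outside of Excel, the same matrix can be sketched with pandas, reusing the donors DataFrame constructed in the “Combine Some Variables” sketch earlier on this page (the column names are mine, not the post’s):

```python
# Off-diagonal values near +1 or -1 among the X columns are the multicollinearity warning sign
corr = donors[["Contribution", "MonthsSinceLast", "TimesDonated", "AvgContribution"]].corr()
print(corr.round(2))
```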

Correcting Multicollinearity: Dropping Variables

In today’s post, we will discuss one of the remedies for multicollinearity – dropping a highly correlated independent variable. Next week, we’ll discuss the other approaches to correcting multicollinearity. Sometimes, when a variable is “iffy,” we can save ourselves some trouble and just kick it out. If we were to ignore “Months Since Last Donation,” and run our regression with the remaining two variables, we end up with the following regression equation:

Ŷ= 60.68 + 3.37X2 + 0.45X3

We get R2 =0.924, suggesting that we didn’t lose much explanatory power by excluding “Months Since Last Donation.” We also get an F statistic of 103.36, much higher than the 73.90 we had in our original model. A higher F-statistic indicates a model that is more statistically valid. It also reflects the exclusion of one or more extraneous variables. Also, the t-statistics for both independent variables are significant, and they’re even higher than they were in the original model, further indicating increased validity:

Parameter              Coefficient    T-statistic    Significant?
Intercept              60.68          7.24           Yes
Times Donated          3.37           5.83           Yes
Average Contribution   0.45           5.49           Yes

Dropping “Months Since Last Donation” from our analysis worked here. However, dropping variables without a rational decision process can cause new problems. In some cases, dropping a variable can result in specification bias, as we saw in our previous example of predicting profit margin for savings and loan associations a few weeks ago. So, consider dropping variables cautiously.
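The same remedy can be sketched in code, again reusing the donors DataFrame from the earlier sketch:

```python
import statsmodels.api as sm

X_reduced = sm.add_constant(donors[["TimesDonated", "AvgContribution"]])
reduced = sm.OLS(donors["Contribution"], X_reduced).fit()

print(reduced.params)                    # the post reports roughly 60.68, 3.37, and 0.45
print(reduced.rsquared, reduced.fvalue)  # roughly 0.924 and 103.36
print(reduced.tvalues)                   # all significant
```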

Next Forecast Friday Topic: More Multicollinearity Remedies

Today, we described one of the ways to remedy multicollinearity – dropping variables. Next week, we will explore two other ways of correcting multicollinearity: obtaining more data and transforming variables. We will also discuss the pitfalls of all three of these remedies, and we will discuss when it’s not worth it to reduce the impact of multicollinearity.

*************************************

Let Analysights Take the Pain out of Forecasting!

Multicollinearity is but one of the many problems you can encounter when forecasting. Let Analysights walk you through the forecasting process so that you can spend more time making strategic decisions and less time trying to guess first where business is going. We will make your forecasting efforts seamless, so you can concentrate on running your business. Check out our Web site or call (847) 895-2565.

Forecast Friday Topic: Building Regression Models With Excel

July 8, 2010
(Twelfth in a series)

We’ve spent the last six weeks discussing regression analysis as a forecasting method. As you have seen, simple regression is a bit tedious to work out by hand, but for multiple regression analysis, you almost always need the aid of a computerized software package. Today I will demonstrate for you how to use the Regression Analysis feature of Microsoft Excel’s1 Analysis ToolPak Add-In. Excel 2007 comes with the Analysis ToolPak Add-In, which you can choose to activate. One way to know if Analysis ToolPak is activated on your version of Excel is to click on the Data tab on your workspace and see if there is a “Data Analysis” icon. The following thumbnail will illustrate:

 

Seeing if the Data Analysis ToolPak Add-In is activated.

Checking for the Data Analysis Add-In

Notice towards the upper right corner of the image, “Data Analysis” is highlighted in orange. The presence of the Data Analysis icon means that we have activated the Analysis ToolPak Add-In. If it wasn’t there, you would need to activate the Add-In, which you could do very easily by clicking the “Office” button in the top left hand corner, then clicking the “Excel Options” button, which will take you through the process of activating your add-in.

Setting up a Regression – Our Data Set

Generally in direct mail marketing, three components that often determine how much one spends – or whether he/she buys at all – are recency, frequency, and monetary value – known in short as RFM. Generally, the longer it has been since one’s last purchase (recency), the less he/she is likely to spend. Hence, we would expect a minus sign by the coefficient for recency. Also, RFM theorizes that the more frequently one buys, the greater his/her purchase. So, we would expect a plus sign by the coefficient for frequency. Finally, the higher the customer’s average purchases (monetary value), the greater his/her spending, so we would also expect a plus sign here. RFM is also used heavily by nonprofits in their capital and contributor campaigns, since they are often heavily reliant upon direct mail.

In our example here, a local nonprofit decided to test whether each RFM component had a relationship to a donor’s contribution, so it randomly selected 20 donors who contributed to its last appeal. Naturally, our dependent variable, Y, was the Contribution amount. The nonprofit also looked at three independent variables: months since last contribution (X1), times donated in last 12 months (X2), and average contribution over the last 12 months (X3). These independent variables represent recency, frequency, and monetary value, respectively. The table below shows our dataset:

Giving Patterns of 20 Donors

Donor   Contribution   Months Since Last Donation   Times Donated in last 12 months   Average Contribution in last 12 months
1       95             10                           1                                 85
2       110            8                            2                                 95
3       100            10                           2                                 90
4       115            8                            3                                 75
5       100            9                            1                                 95
6       120            6                            2                                 100
7       105            9                            1                                 90
8       125            10                           1                                 125
9       105            9                            2                                 100
10      130            4                            3                                 150
11      135            7                            4                                 125
12      150            2                            8                                 150
13      140            4                            3                                 125
14      155            2                            9                                 140
15      140            2                            8                                 130
16      160            2                            10                                150
17      145            3                            6                                 135
18      165            1                            12                                150
19      150            3                            4                                 160
20      170            1                            12                                140

The thumbnail below shows what the data set looks like in Excel:

Data Set in Excel

Regression Data Set in Excel

Running the Regression

To run the regression, we need to select the regression tool from the Analysis ToolPak. We do this by clicking on the Data Analysis Tab. The  next thumbnail shows us what we need to do:

Regression Option

Selecting the Regression option from the Data Analysis ToolPak

After selecting the regression tool, we need to select our independent variables and our dependent variables. It is best to make sure all columns containing your independent variables are adjacent to each other, as they are in columns D, E, and F. Notice that column C from rows 2 to 22 contains our Y-range values (including the column label). In columns D, E, and F, rows 2 through 22 contain their respective X-range values. Notice in the thumbnail how we indicate those column/row positions for Y-range and X-range values.

Regression Options

Regression Options

We also need to decide where to place the regression output and what data we want the output to contain. In the thumbnail below, we choose to have the output placed in a new worksheet, called “Regression Output”, and we also check the box indicating that we want the residuals printed. Also notice that we checked the box “Labels”, so that row 2 won’t be inadvertently added into the model.

Regression Options - Continued

Regression Options - Continued

Looking at the Output

Now we run the regression and get the following output:

Regression Output

Regression Output (residuals not shown)

As you can see, cell B5 contains our R2, equal to .933, indicating that 93.3% of the variation in a donor’s contribution amount is explained by changes in recency, frequency, and monetary value. Also, notice the F-statistic in cell E12. It’s a large, strong 73.90, and cell F12 to the right is 0.00, suggesting the model is significant. (Note, the Significance F in cell F12 and the P-Values in cells E17-E20 for each parameter estimate are quick cues to significance. If you’re using a 95% confidence interval – which we are here – then you want those values to be no higher than 0.05).

Now let’s look at each parameter estimate. Cells B17-B20 contain our regression coefficients. We have the following equation:

Contribution Estimate = 87.27 – 1.80 *Months_Since_Last_Donation + 2.45 *Times_Donated_Last_12_Months + 0.35 *Average_Contribution_Last_12_Months

Simplifying, we have:

Contribution Estimate = 87.27 – 1.80*RECENCY + 2.45*FREQUENCY + 0.35*MONETARY_VALUE

Ŷ = 87.27 – 1.80X1 + 2.45X2 + 0.35X3
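If you’d rather cross-check the Excel output in code, here is a sketch using statsmodels; it assumes the donors DataFrame built from this same table in the sketches earlier on this page (the column names there are mine).

```python
import statsmodels.api as sm

X = sm.add_constant(donors[["MonthsSinceLast", "TimesDonated", "AvgContribution"]])
full = sm.OLS(donors["Contribution"], X).fit()

print(full.params)      # roughly 87.27, -1.80, 2.45, 0.35
print(full.rsquared)    # roughly 0.933
print(full.pvalues)     # months-since-last should come up insignificant
```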

Note that even though we opted to display the residuals for each observation, I chose not to show them here.  It would have run below the fold, and would have been difficult to see.  Besides, for our analysis, we’re not going to worry about residuals right now.

Interpreting the Output

As we can see, each month since a donor’s last contribution reduces his contribution by an average of $1.80, when we hold frequency and monetary value constant. Likewise, for each time a donor has given in the last 12 months, the size of his contribution increases by an average of $2.45, holding the other two variables constant. In addition, each one-dollar increase in a donor’s average contribution increases his contribution by an average of 35 cents. Hence, all of our coefficients have the signs we expect.

T-Statistics and P-Values

Next, we need to look at the t-statistics and P-values. As mentioned above, for a 95% confidence interval, a parameter estimate must have a p-value no greater than 0.05 (or 0.10 for a 90% confidence interval, etc.), in order to be significant. In like manner, for a 95% confidence interval, t-statistics should be values of at least 1.96 (slightly higher for small samples, but 1.96 will work) or less than -1.96 if the coefficient is negative, to be significant:

Parameter              Coefficient    T-statistic    Significant?
Intercept              87.27          4.32           Yes
Months since Last      (1.80)         (1.44)         No
Times Donated          2.45           2.87           Yes
Average Contribution   0.35           3.26           Yes

Notice that the coefficient for Months Since Last Donation has a t-statistic of -1.44. It is not significant. Another way to tell whether the parameter estimates are significant is to look at the Lower 95% and Upper 95% values in columns F and G.  If the lower and upper 95% confidence interval values for a parameter estimate are both negative or both positive, they are significant.  However, if the lower 95% value is negative and the upper 95% is positive (as is the case with Months Since Last Donation), then the parameter estimate is not significant, since its confidence interval range crosses zero.  Hence Months Since Last Donation is not significant.  Yet, the model still has a 93.3% coefficient of determination. Does this mean we can drop this variable from our regression? Not so fast!

Regression Violation Present!

Generally, when an independent variable we expect to be an important predictor of our dependent variable comes up as statistically insignificant, it is sometimes a sign of multicollinearity. And that is definitely the case with the nonprofit’s model. That will be our topic in next week’s Forecast Friday post.

Forecasting with the Output

Since we’re going to take on multicollinearity next week, let’s pretend our model is A-OK, and generate some forecasts.

We’ll go to our regression output worksheet, select cells A17 through B20, which contain our regression variables and coefficients, and then click Copy (or do a CTRL-C):

Selecting the Coefficients

Selecting the Coefficients

Next, let’s paste those coefficients and transpose them in another worksheet.  Here’s how to select the “Transpose” option when pasting:

Pasting Data Using the Transpose Option

Next, this is what the result of our transpose will be:

Transposed Data

Transposed Data

Now, the nonprofit organization looks at five prospective donors whom they are planning to solicit. They look at their past giving history as shown in the next thumbnail:

Prospective Donors - Before Applying Model

Knowing this information, we want to multiply those values by their respective coefficients. Take a look at the formula in cell F7 as we do just that, in the next thumbnail:

Forecasting with Regression Output

Note how the cell references containing the coefficients have their column letters enveloped in ‘$’. The dollar signs tell Excel that when we copy the formula down the next four rows, it should still reference those cells. Otherwise, for each row down, Excel would multiply each blank cell below the coefficients by the next donor’s information. Here are the forecasts generated:

Forecasts made with Model
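The same forecasting step can be sketched in code with a plain dot product; the prospect rows below are illustrative placeholders, not the post’s actual five donors (whose figures appear only in the thumbnail).

```python
import numpy as np

intercept = 87.27
coefs = np.array([-1.80, 2.45, 0.35])   # months since last, times donated, average contribution

# Illustrative prospects: [months since last donation, times donated, average contribution]
prospects = np.array([
    [5,  4, 110],
    [2,  9, 145],
    [11, 1,  80],
])

forecasts = intercept + prospects @ coefs
print(forecasts.round(2))
```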

Next Forecast Friday Topic: Multicollinearity

Today you learned how to develop regression models using Excel and how to use Excel to interpret the output. You also found out that our model exhibited multicollinearity, a violation of one of the key regression assumptions. Next week and the week after, we will discuss multicollinearity in depth: how to detect it, how to correct it, and when to live with it. We will again be using the nonprofit’s model.  As I’ve said before, models are far from perfect and, as such, should only aid – not replace – the decision-making process.

 1 Note: Excel is a registered trademark of Microsoft Corporation. Use of Microsoft Excel in this post is intended only for a demonstration of how to use Excel for regression analysis and does not constitute an endorsement of Microsoft Excel or any other Microsoft product by Analysights, LLC.

*************************

If you Like Our Posts, Then “Like” Us on Facebook!

Analysights is now doing the social media thing! If you like Forecast Friday – or any of our other posts – then we want you to “Like” us on Facebook! By “Like-ing” us on Facebook, you’ll be informed every time a new blog post has been published, or when other information comes out. Check out our Facebook page!