Archive for May, 2010

Forecast Friday Topic: Simple Regression Analysis

May 27, 2010

(Sixth in a series)

Today, we begin our discussion of regression analysis as a time series forecasting tool. This discussion will take the next few weeks, as there is much behind it. As always, I will make sure everything is simplified and easy for you to digest. Regression is a powerful tool that can be very helpful for mid- and long-range forecasting. Quite often, the business decisions we make require us to consider relationships between two or more variables. Rarely can we make changes to our promotion, pricing, and/or product development strategies without them having an impact of some kind on our sales. Just how big an impact would that be? How do we measure the relationship between two or more variables? And does a real relationship even exist between those variables? Regression analysis helps us find out.

One thing I must point out: Remember the “deviations” we discussed in the posts on moving average and exponential smoothing techniques – the differences between the forecasted and actual values for each observation, whose absolute values we took? Good. In regression analysis, we refer to these deviations as the “error terms” or “residuals.” The residuals – which we will square, rather than take the absolute value of – become very important in gauging the regression model’s accuracy, validity, efficiency, and “goodness of fit.”

Simple Linear Regression Analysis

Sue Stone, owner of Stone & Associates, looked at her CPA practice’s monthly receipts from January to December 2009. The sales were as follows:

Month       Sales
January     $10,000
February    $11,000
March       $10,500
April       $11,500
May         $12,500
June        $12,000
July        $14,000
August      $13,000
September   $13,500
October     $15,000
November    $14,500
December    $15,500

Sue is trying to predict what sales will be for each month in the first quarter of 2010, but is unsure of how to go about it. Moving average and exponential smoothing techniques rarely forecast more than one period ahead. So, what is Sue to do?

When we are presented with a set of numbers, one of the ways we try to make sense of it is by taking its average. Perhaps Sue can average all 12 months’ sales – $12,750 – and use that as her forecast for each of the next three months. But how accurately would that measure each month of 2009? How spread out are each month’s sales from the average? Sue subtracts the average from each month’s sales and examines the differences:

Month       Sales      Sales Less Average Sales
January     $10,000    -$2,750
February    $11,000    -$1,750
March       $10,500    -$2,250
April       $11,500    -$1,250
May         $12,500    -$250
June        $12,000    -$750
July        $14,000    $1,250
August      $13,000    $250
September   $13,500    $750
October     $15,000    $2,250
November    $14,500    $1,750
December    $15,500    $2,750

Sue notices that the error between actual and average is quite high in both the first four months of 2009 and in the last three months of 2009. She wants to understand the overall error in using the average as a forecast of sales. However, when she sums up all the errors from month to month, Sue finds they sum to zero. That tells her nothing. So she squares each month’s error value and sums them:

Month       Sales      Error      Error Squared
January     $10,000    -$2,750    $7,562,500
February    $11,000    -$1,750    $3,062,500
March       $10,500    -$2,250    $5,062,500
April       $11,500    -$1,250    $1,562,500
May         $12,500    -$250      $62,500
June        $12,000    -$750      $562,500
July        $14,000    $1,250     $1,562,500
August      $13,000    $250       $62,500
September   $13,500    $750       $562,500
October     $15,000    $2,250     $5,062,500
November    $14,500    $1,750     $3,062,500
December    $15,500    $2,750     $7,562,500
Total Error:                      $35,750,000

In totaling these squared errors, Sue derives the total sum of squares, or TSS: 35,750,000. Is there any way she can improve upon that? Sue thinks for a while. She doesn’t know much more about her 2009 sales except the month in which they were generated. She plots the sales on a chart:
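Sue’s arithmetic can be reproduced in a few lines of Python (a sketch; the variable names are my own):

```python
# Sue's 2009 monthly sales, January through December
sales = [10000, 11000, 10500, 11500, 12500, 12000,
         14000, 13000, 13500, 15000, 14500, 15500]

mean_sales = sum(sales) / len(sales)          # the average: $12,750
errors = [y - mean_sales for y in sales]      # deviations from the average

# The raw deviations cancel out to zero...
print(sum(errors))

# ...so square each one before summing: the total sum of squares (TSS)
tss = sum(e ** 2 for e in errors)
print(tss)   # 35,750,000
```

The squaring step is what keeps the positive and negative deviations from canceling each other.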

Sue notices that sales by month appear to be on an upward trend. Sue thinks for a moment. “All I know is the sales and the month,” she says to herself, “How can I develop a model to forecast accurately?” Sue reads about a statistical procedure called regression analysis and, seeing that each month’s sales is in sequential order, she wonders whether the mere passage of time simply causes sales to go higher. Sue numbers each month, with January assigned a 1 and December, a 12.

She also realizes that she is trying to predict sales with each passing month. She therefore hypothesizes that the change in sales depends on the change in the month: sales is Sue’s dependent variable. Because the month number is used to estimate the change in sales, it is her independent variable. In regression analysis, the relationship between a dependent and an independent variable is expressed as:

Y = α + βX + ε

Where: Y is the value of the dependent variable

X is the value of the independent variable

α is a population parameter, called the intercept, which would be the value of Y when X=0

β is also a population parameter – the slope of the regression line – representing the change in Y associated with each one-unit change in X.

ε is the error term.

Sue further reads that the goal of regression analysis is to minimize the error sum of squares, which is why it is referred to as ordinary least squares (OLS) regression. She also notices that she is building her regression on a sample, so there is a sample regression equation used to estimate what the true regression is for the population:

Ŷi = a + bXi + ei

Essentially, the equation is the same as the one above; however, the terms indicate the sample. The Ŷ term (called “Y hat”) is the sample forecasted value of the dependent variable (sales) at period i; a is the sample estimate of α; b is the sample estimate of β; Xi is the value of the independent variable at period i; and ei is the error, or difference between Ŷ (the forecasted value) and actual Y for period i. Sue needs to find the values for a and b – the estimates of the population parameters – that minimize the error sum of squares.

Sue reads that the equations for estimating a and b are derived from calculus, but expressed algebraically as:

b = Σ(X – X̄)(Y – Ȳ) / Σ(X – X̄)²

a = Ȳ – bX̄

Sue learns that the X and Y terms with lines above them, known as “X bar” and “Y bar,” respectively are the averages of all the X and Y values, respectively. She also reads that the Σ notation – the Greek letter sigma – represents a sum. Hence, Sue realizes a few things:

1. She must estimate b before she can estimate a;
2. To estimate b, she must take care of the numerator:
   a. first subtract the average month number from each observation’s month number (X minus X-bar),
   b. then subtract average sales from each observation’s sales (Y minus Y-bar),
   c. multiply those two differences together, and
   d. add up the products from (2c) for all observations.
3. To get the denominator for calculating b, she must:
   a. again subtract X-bar from X, but then square the difference, for each observation, and
   b. sum those squares up.
4. Calculating b is easy: she needs only to divide the result from (2) by the result from (3).
5. Calculating a is also easy: she multiplies her b value by the average month (X-bar), and subtracts that product from average sales (Y-bar).
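Sue’s five steps translate directly into code. Here is a minimal Python sketch of the least-squares calculation (the variable names are mine, not from any library):

```python
months = list(range(1, 13))   # X: January = 1, ..., December = 12
sales = [10000, 11000, 10500, 11500, 12500, 12000,
         14000, 13000, 13500, 15000, 14500, 15500]   # Y

x_bar = sum(months) / len(months)   # average month number: 6.5
y_bar = sum(sales) / len(sales)     # average sales: $12,750

# Numerator: sum of (X - X-bar)(Y - Y-bar) over all observations
numerator = sum((x - x_bar) * (y - y_bar) for x, y in zip(months, sales))

# Denominator: sum of (X - X-bar) squared
denominator = sum((x - x_bar) ** 2 for x in months)

b = numerator / denominator   # slope: ~479.02
a = y_bar - b * x_bar         # intercept: ~9,636.36
```

Running this reproduces the totals Sue tabulates below: a numerator of 68,500 and a denominator of 143.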

Sue now goes to work to compute her regression equation. She goes into Excel and enters her monthly sales data in a table, and computes the averages for sales and month number:

Month (X)   Sales (Y)
1           $10,000
2           $11,000
3           $10,500
4           $11,500
5           $12,500
6           $12,000
7           $14,000
8           $13,000
9           $13,500
10          $15,000
11          $14,500
12          $15,500
Average 6.5 $12,750

Sue goes ahead and subtracts the respective averages from the X and Y values, and computes the components she needs (the “Product” is the result of multiplying the values in the first two columns together):

X minus X-bar   Y minus Y-bar   Product    (X minus X-bar) Squared
-5.5            -$2,750         $15,125    30.25
-4.5            -$1,750         $7,875     20.25
-3.5            -$2,250         $7,875     12.25
-2.5            -$1,250         $3,125     6.25
-1.5            -$250           $375       2.25
-0.5            -$750           $375       0.25
0.5             $1,250          $625       0.25
1.5             $250            $375       2.25
2.5             $750            $1,875     6.25
3.5             $2,250          $7,875     12.25
4.5             $1,750          $7,875     20.25
5.5             $2,750          $15,125    30.25
Total                           $68,500    143

Sue computes b:

b = $68,500 / 143

= $479.02

Now that Sue knows b, she calculates a:

a = $12,750 – $479.02(6.5)

= $12,750 – $3,113.64

= $9,636.36

Hence, assuming errors are zero, Sue’s least-squares regression equation is:

Ŷ = $9,636.36 + $479.02X

Or, in business terminology:

Forecasted Sales = $9,636.36 + $479.02 × Month number

This means that each passing month is associated with an average increase in sales of $479.02 for Sue’s CPA firm. How accurately does this regression model predict sales? Sue estimates the error by plugging each month’s number into the equation and then comparing her forecast for that month with the actual sales:

Month (X)   Sales (Y)   Forecasted Sales   Error
1           $10,000     $10,115.38         -$115.38
2           $11,000     $10,594.41         $405.59
3           $10,500     $11,073.43         -$573.43
4           $11,500     $11,552.45         -$52.45
5           $12,500     $12,031.47         $468.53
6           $12,000     $12,510.49         -$510.49
7           $14,000     $12,989.51         $1,010.49
8           $13,000     $13,468.53         -$468.53
9           $13,500     $13,947.55         -$447.55
10          $15,000     $14,426.57         $573.43
11          $14,500     $14,905.59         -$405.59
12          $15,500     $15,384.62         $115.38

Sue’s actual and forecasted sales appear to be pretty close, except for her July estimate, which is off by a little over $1,000. But does her model predict better than if she simply used average sales as her forecast for each month? To find out, she must compute the error sum of squares (ESS): she squares the error term for each observation and sums those squares:

ESS = Σe²

Error        Squared Error
-$115.38     $13,313.61
$405.59      $164,506.82
-$573.43     $328,818.04
-$52.45      $2,750.75
$468.53      $219,521.74
-$510.49     $260,599.54
$1,010.49    $1,021,089.05
-$468.53     $219,521.74
-$447.55     $200,303.19
$573.43      $328,818.04
-$405.59     $164,506.82
$115.38      $13,313.61
ESS =        $2,937,062.94

Notice Sue’s error sum of squares. This is the error, or unexplained, sum of squared deviations between the forecasted and actual sales. The difference between the total sum of squares (TSS) and the error sum of squares (ESS) is the regression sum of squares, RSS, the sum of squared deviations that are explained by the regression. RSS can also be calculated directly: subtract average sales from each forecasted value of sales, square each difference, and sum them:

Forecasted Sales   Average Sales   Regression Error   Reg. Error Squared
$10,115.38         $12,750         -$2,634.62         $6,941,198.22
$10,594.41         $12,750         -$2,155.59         $4,646,587.24
$11,073.43         $12,750         -$1,676.57         $2,810,898.45
$11,552.45         $12,750         -$1,197.55         $1,434,131.86
$12,031.47         $12,750         -$718.53           $516,287.47
$12,510.49         $12,750         -$239.51           $57,365.27
$12,989.51         $12,750         $239.51            $57,365.27
$13,468.53         $12,750         $718.53            $516,287.47
$13,947.55         $12,750         $1,197.55          $1,434,131.86
$14,426.57         $12,750         $1,676.57          $2,810,898.45
$14,905.59         $12,750         $2,155.59          $4,646,587.24
$15,384.62         $12,750         $2,634.62          $6,941,198.22
RSS =                                                 $32,812,937.06

Sue immediately adds the RSS and the ESS and sees they match the TSS: $35,750,000. She also knows that nearly 33 million of that TSS is explained by her regression model, so she divides her RSS by the TSS:

32,812,937.06 / 35,750,000

= .917, or 91.7%

This quotient, known as the coefficient of determination and denoted R², tells Sue that the month number explains 91.7% of the variation in her monthly sales. In other words, Sue improved her forecast accuracy by 91.7% by using this simple model instead of the simple average. As you will find out in subsequent blog posts, maximizing R² isn’t the “be-all and end-all”. In fact, there is still much to do with this model, which will be discussed in next week’s Forecast Friday post. But for now, Sue’s model seems to have reduced a great deal of error.
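Sue’s whole decomposition can be verified with a short Python sketch (the slope and intercept are recomputed from the raw data so the snippet stands alone; variable names are my own):

```python
# Sue's monthly sales and month numbers
sales = [10000, 11000, 10500, 11500, 12500, 12000,
         14000, 13000, 13500, 15000, 14500, 15500]
months = range(1, 13)

x_bar, y_bar = 6.5, sum(sales) / 12          # averages of X and Y
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(months, sales)) \
    / sum((x - x_bar) ** 2 for x in months)  # slope, ~479.02
a = y_bar - b * x_bar                        # intercept, ~9,636.36

forecasts = [a + b * x for x in months]

tss = sum((y - y_bar) ** 2 for y in sales)                  # total variation
ess = sum((y - f) ** 2 for y, f in zip(sales, forecasts))   # unexplained
rss = sum((f - y_bar) ** 2 for f in forecasts)              # explained

r_squared = rss / tss
print(round(r_squared, 3))   # ~0.918 (the post truncates this to 91.7%)
```

A useful sanity check is that RSS + ESS reproduces the TSS of 35,750,000, exactly as Sue observes.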

It is important to note that while each month does seem to be related to sales, the passing months do not cause the increase in sales. Correlation does not mean causation. There could be something behind the scenes (e.g., Sue’s advertising, or the types of projects she works on, etc.) that is driving the upward trend in her sales.

Using the Regression Equation to Forecast Sales

Now Sue can use the same model to forecast sales for January and February 2010, and beyond. Since January 2010 is period 13, she plugs 13 into her equation for X and gets a forecast of $15,863.64; for February (period 14), she gets $16,342.66.
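The forecast step is a one-liner. A minimal sketch, keeping the slope at full precision so the results match Sue’s figures to the penny:

```python
b = 68500 / 143        # slope from Sue's table, kept unrounded
a = 12750 - 6.5 * b    # intercept

def forecast(period):
    """Forecasted sales for a given period number (January 2009 = period 1)."""
    return a + b * period

print(round(forecast(13), 2))   # January 2010:  15863.64
print(round(forecast(14), 2))   # February 2010: 16342.66
```
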

Recap and Plan for Next Week

You have now learned the basics of simple regression analysis. You have learned how to estimate the parameters for the regression equation, how to measure the improvement in accuracy from the regression model, and how to generate forecasts. Next week, we will be checking the validity of Sue’s equation, and discussing the important assumptions underlying regression analysis. Until then, you have a basic overview of what regression analysis is.


Charities are Spying on You – But That’s Not Necessarily a Bad Thing!

May 26, 2010

The June 2010 issue of SmartMoney magazine contained an interesting article, “Are Charities Spying On You?,” which discussed the different ways nonprofit organizations are trying to find out information – available from public sources – on current and prospective donors. As one who has worked in the field of data mining and predictive analytics, I found the article interesting in large part because of how well the nonprofit sector has made use of these very techniques in designing their campaigns, solicitations, and programming.

At first glance, it can seem frightening what charities can learn about you. For instance, the article mentions how some charities’ prospect-research departments look at LinkedIn profiles, survey your salary history, and even use satellite images to get information on the home in which you live. And there is a wealth of information out there about us: Zillow.com gives info about the value of our homes and those around them; if you write articles or letters to the editor of your newspaper, online versions can often be found on Google; buy or sell any real estate? That too gets published in the online version of the newspaper; and online bridal and baby shower registries, graduation and wedding announcements, and any other news are fair game. And your shopping history! If you buy online or through a catalog, your name ends up on mailing lists that charities buy. Face it, there’s a lot of information about us that is widely and publicly available.

But is this so terrible? For the most part, I don’t think so. Surely, it’s bad if that information is being used against you. But think of the ways this data mining proves beneficial:

Customization

Let’s assume that you and I are both donors to the Republican National Committee. That suggests we’re both politically active and politically conservative. But are we engaged with the RNC in the same way? Most likely not. You might have donated to the RNC because you’re a wealthy individual who values low taxes and opposes a national health care plan; I might have donated because I am a social conservative who wants prayer in public schools, favors school choice, and opposes abortion. By seeking out information on us, the RNC can tailor its communications in a manner that speaks to each of us individually, sending you information about how it’s fighting proposed tax hikes in various states, and sending me information about school choice initiatives. In this way, the RNC maintains its relevance to each of us.

In addition, it’s very likely, in this example, that you’re donating a lot more money to the RNC than I am. Hence, that would likely lead the RNC to offer you special perks, such as free passes for you and a guest to meet various candidates or attend special luncheons or events. As for me, I might at best be given an autographed photo of the event – in exchange for a donation of course – or an invite to the same events, but with a donation of a lot of money requested. I might get information about when the next Tea Party rally in my area will be held. Or even a brief newsletter. One can argue that the treatment you’re getting versus what I’m getting is unfair. However, think of it like this: at a casino, people who gamble regularly and heavily are given all sorts of complimentary perks: drinks, food, a host to attend to their needs, and even special reduced-rate stays. That’s because these gamblers are making so much money for the casino that the cost of these “comps” is small in comparison. In addition, the casino wants to make it more fun for these gamblers to lose money, so that they’ll keep on playing. In short, the special treatment you’re getting is something you’re paying for, if indirectly. I’m getting less because I’m giving less; you’re getting more because you’re giving more. And the charity will give you more to keep you giving more!

Reduced Waste

Before direct marketing got so sophisticated, mass marketing was the only tactic. If you had a product to sell, you sent the same solicitation to thousands, if not millions of people and hoped for a 1-2% response rate. Most people simply threw your solicitation in the garbage when it came in the mail. Many recipients didn’t have a need for the item you were selling or the appeal for which you were soliciting, and disregarded your piece. As a result, lots of paper was wasted, and the phrase “junk mail” came into existence. In addition, if you used follow-up methods, such as phone calls after the mailing, that got costly trying to qualify the leads, just because of the labor involved.

Now, with targeted marketing and list rental, sales, and sharing, charities can build predictive models that estimate each current and prospective donor’s likelihood of responding to a promotion. As a result, the charity doesn’t need to send out quite as large a mailing; it can mail solely to those with the best chance of responding, reducing the amount of paper, print, and postage involved, not to mention the labor costs involved, both in the production of the piece and in the staffing of the outbound call center. In short, the charity’s data mining is helping the environment, reducing overhead, and increasing the top and bottom lines.

Better Programming

By knowing more about you, the charity can know what makes you “tick,” so that it can come up with programs that fit your needs. Even if you’re not a large donor, if you and other donors feel strongly about certain issues, or value certain programs, the charity can develop programs that are suitable to its members at large. And while many larger donors may be granted special privileges, their large donations can help fund the programs of those who donate less. Everybody wins.

Not bad at all

The data mining tactics charities use aren’t bad. People don’t want to be bombarded with solicitations in which they see no value for themselves. Data mining makes it possible to give you an offer that is relevant to your situation, to do so cost-effectively and resource-efficiently, and to design programs from which you’re likely to benefit. It is important to note that while major donors get several great perks, charities must not ignore those whose donations are smaller, for two reasons: first, they have the potential to become major donors, and second, because of their smaller donations, it’s very likely their frequency of giving is greater. This can mean a steady stream of gifts to the charity over time. Hence, charities should do things that show these donors they’re appreciated – and, quite often, this too is accomplished by data mining.

We welcome replies to our blog post!

Forecast Friday Topic: Double Exponential Smoothing

May 20, 2010

(Fifth in a series)

We pick up on our discussion of exponential smoothing methods, focusing today on double exponential smoothing. Single exponential smoothing, which we discussed in detail last week, is ideal when your time series is free of seasonal or trend components, which create patterns that your smoothing equation would miss due to lags. Single exponential smoothing produces forecasts that exceed actual results when the time series exhibits a decreasing linear trend, and forecasts that trail actual results when the time series exhibits an increasing trend. Double exponential smoothing takes care of this problem.

Two Smoothing Constants, Three Equations

Recall the equation for single exponential smoothing:

Ŷt+1 = αYt + (1-α) Ŷt

Where: Ŷt+1 represents the forecast value for period t + 1

Yt is the actual value of the current period, t

Ŷt is the forecast value for the current period, t

and α is the smoothing constant, or alpha, 0 ≤ α ≤ 1
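As a quick refresher, that recursion can be sketched in a few lines of Python (a sketch; the first forecast is seeded with the first actual value, one common convention from last week’s post):

```python
def single_exponential_smoothing(series, alpha):
    """One-step-ahead forecasts: forecasts[t] is the forecast for series[t]."""
    forecasts = [series[0]]   # seed the first forecast with the first actual
    for t in range(1, len(series)):
        # Ŷ(t+1) = α·Y(t) + (1 − α)·Ŷ(t)
        forecasts.append(alpha * series[t - 1] + (1 - alpha) * forecasts[t - 1])
    return forecasts

# Three periods of sales with alpha = 0.2: the third forecast is
# 0.2(176) + 0.8(152) = 156.8
fc = single_exponential_smoothing([152, 176, 160], 0.2)
```
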

To account for a trend component in the time series, double exponential smoothing incorporates a second smoothing constant, beta, or β. Now, three equations must be used to create a forecast: one to smooth the time series, one to smooth the trend, and one to combine the two equations to arrive at the forecast:

Ct = αYt + (1 – α)(Ct-1 + Tt-1)

Tt = β(Ct – Ct-1) + (1 – β)Tt-1

Ŷt+1 = Ct + Tt

All symbols appearing in the single exponential smoothing equation represent the same in the double exponential smoothing equation, but now β is the trend-smoothing constant (whereas α is the smoothing constant for a stationary – constant – process) also between 0 and 1; Ct is the smoothed constant process value for period t; and Tt is the smoothed trend value for period t.

As with single exponential smoothing, you must select starting values for Ct and Tt, as well as values for α and β. Recall that these processes are judgmental, and constants closer to a value of 1.0 are chosen when less smoothing is desired (and more weight placed on recent values) and constants closer to 0.0 when more smoothing is desired (and less weight placed on recent values).

An Example

Let’s assume you’ve got 12 months of sales data, shown in the table below:

Month t   Sales Yt
1         152
2         176
3         160
4         192
5         220
6         272
7         256
8         280
9         300
10        280
11        312
12        328

You want to see if there is any discernible trend, so you plot your sales on the chart below:

The time series exhibits an increasing trend. Hence, you must use double exponential smoothing. You must first select your initial values for C and T. One way to do that is to again assume that the first value is equal to its forecast. Using that as the starting point, you set C2 = Y1, or 152. Then you subtract Y1 from Y2 to get T2: T2 = Y2 – Y1 = 24. Hence, at the end of period 2, your forecast for period 3 is 176 (Ŷ3 = 152 + 24).

Now you need to choose α and β. For the purposes of this example, we will choose an α of 0.20 and a β of 0.30. Actual sales in period 3 were 160, and our constant-smoothing equation is:

C3 = 0.20(160) + (1 – 0.20)(152 + 24)

= 32 + 0.80(176)

= 32 + 140.8

= 172.8

Next, we compute the trend value with our trend-smoothing equation:

T3 = 0.30(172.8 – 152) + (1 – 0.30)(24)

= 0.30(20.8) + 0.70(24)

= 6.24 + 16.8

=23.04

Hence, our forecast for period 4 is:

Ŷ4 = 172.8 + 23.04

= 195.84

Then, carrying out your forecasts for the 12-month period, you get the following table:

Alpha = 0.2, Beta = 0.3

Month t   Sales Yt   Ct       Tt      Ŷt       Absolute Deviation
1         152
2         176        152.00   24.00   152.00
3         160        172.80   23.04   176.00   16.00
4         192        195.07   22.81   195.84   3.84
5         220        218.31   22.94   217.88   2.12
6         272        247.39   24.78   241.24   30.76
7         256        268.94   23.81   272.18   16.18
8         280        290.20   23.05   292.75   12.75
9         300        310.60   22.25   313.25   13.25
10        280        322.28   19.08   332.85   52.85
11        312        335.49   17.32   341.36   29.36
12        328        347.85   15.83   352.81   24.81
                                      MAD =    20.19

Notice a couple of things: the absolute deviation is the absolute value of the difference between Yt and Ŷt. Note also that, beginning with period 3, Ŷ3 is really the sum of the C and T computed in period 2. That’s because period 3’s constant and trend forecasts were generated at the end of period 2 – and so on through period 12. The mean absolute deviation (MAD) has been computed for you. As with our explanation of single exponential smoothing, you need to experiment with the smoothing constants to find the combination that yields the most accurate forecast at the lowest possible MAD.
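The three-equation recursion can be sketched in Python. This sketch (with variable names of my own choosing) follows the initialization used above – C2 = Y1 and T2 = Y2 – Y1 – and reproduces the table’s figures:

```python
def double_exponential_smoothing(series, alpha, beta):
    """Return {period: forecast} pairs; periods are 1-based, forecasts start at period 3."""
    level = series[0]                 # C2 = Y1
    trend = series[1] - series[0]     # T2 = Y2 - Y1
    forecasts = {3: level + trend}    # Ŷ3 = C2 + T2
    for period in range(3, len(series) + 1):
        actual = series[period - 1]
        prev_level = level
        level = alpha * actual + (1 - alpha) * (level + trend)    # smooth the constant process
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smooth the trend
        forecasts[period + 1] = level + trend                     # forecast for the next period
    return forecasts

sales = [152, 176, 160, 192, 220, 272, 256, 280, 300, 280, 312, 328]
fc = double_exponential_smoothing(sales, alpha=0.2, beta=0.3)

# Mean absolute deviation over periods 3 through 12
mad = sum(abs(sales[p - 1] - fc[p]) for p in range(3, 13)) / 10
print(round(fc[4], 2), round(fc[13], 2), round(mad, 2))   # 195.84 363.68 20.19
```

Note that the forecast for period 13 falls out of the same loop: it is simply C12 + T12.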

Now, we need to forecast for period 13. That’s easy. Add C12 and T12:

Ŷ13 = 347.85 + 15.83

= 363.68

And, your chart comparing actual vs. forecasted sales is:

As with single exponential smoothing, you see that your forecasted curve is smoother than your actual curve. Notice also how small the gaps are between the actual and forecasted curves. The fit’s not bad.

Exponential Smoothing Recap

Now let’s recap our discussion on exponential smoothing:

1. Exponential smoothing methods are recursive, that is, they rely on all observations in the time series. The weight on each observation diminishes exponentially the more distant in the past it is.
2. Smoothing constants are used to assign weights – between 0 and 1 – to the most recent observations. The closer the constant is to 0, the more smoothing that occurs and the lighter the weight assigned to the most recent observation; the closer the constant is to 1, the less smoothing that occurs and the heavier the weight assigned to the most recent observation.
3. When no discernible trend is exhibited in the data, single exponential smoothing is appropriate; when a trend is present in the time series, double exponential smoothing is necessary.
4. Exponential smoothing methods require you to generate starting forecasts for the first period in the time series. Deciding on those initial forecasts, as well as on the values of your smoothing constants – alpha and beta – is judgmental. You need to base your judgments on your experience in the business, as well as some experimentation.
5. Exponential smoothing models do not forecast well when the time series pattern (e.g., level of sales) is suddenly, drastically, and permanently altered by some event or change of course or action. In these instances, a new model will be necessary.
6. Exponential smoothing methods are best used for short-term forecasting.

Next Week’s Forecast Friday Topic: Regression Analysis (Our Series within the Series!)

Next week, we begin a multi-week discussion of regression analysis. We will be setting up the next few weeks with a discussion of the principles of ordinary least squares regression (OLS), and then discussions of its use as a time-series forecasting approach, and later as a causal/econometric approach. During the course of the next few Forecast Fridays, we will discuss the issues that occur with regression: specification bias, autocorrelation, heteroscedasticity, and multicollinearity, to name a few. There will be some discussions on how to detect – and correct – these violations. Once the regression analysis miniseries is complete, we will be set up to discuss ARMA and ARIMA models, which will be written by guest bloggers who are well-experienced in those approaches. We know you’ll be very pleased with the weeks ahead!

Still don’t know why our Forecast Friday posts appear on Thursday? Find out at:

New York Life: How Traditional Approach Made for Great Marketing

May 19, 2010

This week, I got the May 24 issue of Fortune Magazine and skipped to this issue’s profile of one of the “World’s Most Admired Companies.” This time it was New York Life, the nation’s largest mutual life insurer. As I read the article, I was pretty intrigued by the company’s operation: very conservative. While New York Life is owned by policyholders, it didn’t follow the lead of other major insurers to invest aggressively for the sake of paying generous dividends. And the insurer chose to remain neutral in a price war on some lines of insurance, even though that meant losing some business in 2008. New York Life also invests in its own captive sales force – 12,000 agents strong – a practice so cost prohibitive to many publicly-traded insurers that they’re forced to rely on a network of banks, independent agents, and broker-dealers to push their insurance.

Fewer and fewer of us want to be viewed as traditional or passé, so one would think that New York Life’s conservative approach would have cost it a great deal of business. And in the go-go years, that seemed to be the case. But now, two years after a near meltdown in financial services, New York Life appears to have been vindicated: it had a record $15 billion surplus of cash in 2009; it has continued to pay policyholder dividends – for the 156th consecutive year; and it sold 40,000 more policies in 2009. Even better, it didn’t have to raise premium rates like many of its price-war competitors.

Just look at the effective marketing system New York Life has built for itself. Recall the components of the marketing mix: product, price, position, promotion, and distribution. It’s easy to discern from the article that New York Life got all of these components right. While New York Life also sells mutual funds, long-term care insurance, and annuities, it has neither forgotten nor abandoned its core product: life insurance. In fact, the company still emphasizes it as an important part of a family’s protection. Because of its traditional investment style, New York Life’s pricing is competitive. In terms of promotion, New York Life turned its traditional operation into a distinct advantage, boosting ad spending by 24% and trumpeting how its conservative style was appropriate for these economic times. Distribution is handled by New York Life’s own captive agent force – the only agents for New York Life, all New York Life, and nothing but New York Life. Every New York Life agent I’ve met knows the company’s products backward and forward, and knows quickly which ones are most suitable for prospective and existing customers. As for positioning, New York Life can market itself as the kind of insurance company that gives its policyholders great peace of mind. Policyholders can sleep at night knowing dividends will be paid consistently, premiums will remain stable, that they have the right insurance, and that the company will be around to pay out when they need to make a claim.

I am not a New York Life policyholder. I came very close a couple of years ago, but another company had a policy that was better suited to my needs. And I found it hard to turn down the New York Life agent who had been working with me to find the right policy. But when my insurance needs change, New York Life is on my short list – a further testament to its marketing success: make a great impression on a prospective customer so that even if he or she doesn’t buy now, there’s a good chance he or she will in the future.