Posts Tagged ‘Marketing Analytics’

Read All About It: Why Newspapers Need Marketing Analytics

October 26, 2010

After nearly 20 years, I decided to let my subscription to the Wall Street Journal lapse. A few months ago, I did likewise with my longtime subscription to the Chicago Tribune. I didn’t want to end my subscriptions, but as a customer, I felt my voice wasn’t being heard.

Some marketing research and predictive modeling might have enabled the Journal and the Tribune to keep me from defecting. From these efforts, both publications could have spotted my increasing frustration and dissatisfaction and intervened before I chose to vote with my feet.

Long story short, I let both subscriptions lapse for the same reason: chronic unreliable delivery, which was allowed to fester for many years despite numerous calls by me to their customer service numbers about missing and late deliveries.

Marketing Research

Both newspapers could have used marketing research to alert them to the likelihood that I would not renew my subscriptions. They each had lots of primary research readily available to them, without needing to do any surveys: my frequent calls to their customer service department, with the same complaint.

Imagine the wealth of insights both papers could have reaped from this data. They could identify the most common breaches of customer service; by looking at the number of times customers complained about the same issue, they could determine where problems were left unresolved; by breaking down the most frequent complaints by geography, they could determine whether additional delivery persons needed to be hired or more training was necessary; and, most of all, both newspapers could have found their most frequent complainers and reached out to them to see what could be improved.

Both newspapers could have also conducted regular customer satisfaction surveys of their subscribers, asking about overall satisfaction and likelihood of renewing, followed by questions about subscribers’ perceptions about delivery service, quality of reporting, etc. The surveys could have helped the Journal and the Tribune grab the low-hanging fruit by identifying the key elements of service delivery that have the strongest impact on subscriber satisfaction and likelihood of renewal, and then coming up with a strategy to secure satisfaction with those elements.

Predictive Modeling

Another way both newspapers might have been able to intervene and retain my business would have been to predict my likelihood of lapse. This so-called attrition or “churn” modeling is common in industries built on continuity relationships: newspapers and magazines, credit cards, membership associations, health clubs, banks, wireless communications, and broadband cable, to name a few.

Attrition modeling (which, incidentally, will be discussed in the next two Forecast Friday posts) involves developing statistical models that compare the attributes and characteristics of current customers with those of former, or churned, customers. The dependent variable being measured is whether a customer churned: a 1 if “yes” and a 0 if “no.”

Essentially, in building the model, the newspapers would look at several independent, or predictor, variables: customer demographics (e.g., age, income, gender), frequency of complaints, and geography, to name a few. The model would then identify the variables that are the strongest predictors of whether a subscriber will not renew, and generate a score between 0 and 1 indicating each subscriber’s probability of not renewing. For example, a probability score of .72 indicates a 72% chance that a subscriber will let his/her subscription lapse, and that the newspaper may want to intervene.

In my case, both newspapers might have run such an attrition model to see if number of complaints in the last 12 months was a strong predictor of whether a subscriber would lapse. If that were the case, I would have a high probability of churn, and they could then call me; or, if they found that subscribers who churned were clustered in a particular area, they might be able to look for systemic breakdowns in customer service in that area. Either way, both papers could have found a way to salvage the subscriber relationship.
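To make this concrete, below is a minimal sketch of what such an attrition model might look like in code. It assumes Python with pandas and scikit-learn, uses a logistic regression (one common choice for a 0/1 churn outcome), and reads a hypothetical subscriber file whose column names are invented purely for illustration.

```python
# Minimal churn-model sketch, assuming pandas and scikit-learn are available.
# File name and column names (complaints_12m, churned, etc.) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical subscriber file: one row per current or former subscriber.
subscribers = pd.read_csv("subscribers.csv")

features = ["age", "income", "tenure_years", "complaints_12m", "late_deliveries_12m"]
X = subscribers[features]
y = subscribers["churned"]          # 1 = subscription lapsed, 0 = still active

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Probability of lapsing for each holdout subscriber (0 to 1),
# analogous to the .72 score discussed above.
scored = X_test.copy()
scored["p_churn"] = model.predict_proba(X_test)[:, 1]

# Flag high-risk subscribers for a retention call.
at_risk = scored[scored["p_churn"] > 0.7]
print(at_risk.sort_values("p_churn", ascending=False).head())
```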



Why Surveys Go Well With Predictive Models

October 13, 2010

Thanks to advancements in technology, companies now have the capability to analyze millions – if not billions – of transactional, demographic, and psychographic records in a short time and develop sophisticated models that can assess several scenarios: how likely a customer is to purchase again; when he/she will purchase again; how much he/she will spend in the next year; how likely he/she is to defect; and many more. Yet, by themselves, predictive models don’t provide a complete picture or profile of the customer. While models can estimate a prospect’s or customer’s willingness and ability to purchase based on similar characteristics of current customers, they don’t provide much information about the customer’s or prospect’s readiness to buy. Hence, a survey can be a highly useful supplement.

Using a survey before a promotion – assuming no effort is made trying to sell to the customer under the guise of the survey – can provide valuable information. With a simple attitudinal and behavioral survey, a marketer can gain a read on the market’s readiness and willingness to buy at that moment. Moreover, the marketer can gauge the purchase readiness of certain customer groups and segments, so that he/she can structure marketing promotions in a manner that makes the best use of marketing dollars. In addition, if certain groups are wary of or unwilling to buy a product, the marketer can look for ways to reach out to these groups for the future.

Another benefit of surveys is to help classify customers and prospects into market segments based on their answers to carefully designed questions. Often, surveys can capture data about prospects and customers that transactional and third-party overlay data sources cannot.

Surprisingly, many companies do either marketing research or predictive modeling, but not both. This squanders a great marketing opportunity. Together, these two approaches can provide the missing pieces of the puzzle that will help marketers improve their planning, increase their marketing ROI, and maximize their profits and market share.

Data Mining Meets Online Dating

September 28, 2010

The September 27 issue of Fortune Magazine had two stories that pertain to data mining and predictive modeling. One of them, eHarmony’s Algorithm of Love, is an interesting account of how eHarmony is using predictive analytics tools to maximize the likelihood of a couple being a good match. Since the article is brief, any commentary I might add – other than “it drives home the points I’ve been making” – would simply parrot the article. So I thought I’d let you click on the link and enjoy!

The Man Who Feared Analytics

June 9, 2010

A business owner had once been referred to me by a colleague with whom he had already been doing business. For many years, the businessman’s photography business had been sustained through direct mail advertising, and he often received a 5%-7% response rate, an accomplishment that would boggle the minds of most direct marketers. But with the recent economic downturn, combined with photography being a discretionary expense, he soon found his direct mail solicitations bringing in a puny 0.8% response rate. The business owner had a great product, a great price, and a great offer, but at that response rate, he was no longer breaking even.

My colleague and I spoke with the businessman about his dilemma. We talked through his business; we looked at his most recent mailer, learned how he obtained his mailing lists, and discussed his promotion schedule. We found that the photographer would buy a list of names, mail them once, and then use a different list, not giving people enough opportunity to develop awareness of his business. We also found that he didn’t have much information about the people he was mailing.

We recommended that analytics could help the photographer maximize his margin by improving both the top and bottom line. Analytics would first help him understand which customers were responding to his mailings. Then he could purchase lists of people with characteristics similar to those past respondents. His response rate would go up, since he would be sending to a list of people most receptive to his photography. He would also be able to mail fewer people, cutting out those with little likelihood of response. He could then use the savings to remail the members of his target segments who hadn’t responded to his earlier mailing, and thus increase their awareness. It all sounded good to the photographer.

And then, he decided he was going to wait to see if things got better!

Why the Fear of Analytics?

The photographer’s decision is a common refrain of marketers. Marketers and business owners who are introduced to analytics are like riders on a roller coaster: thrilled and nervous at the same time. While marketers are excited about the benefits of analytics, they are also concerned about its cost; they’re afraid of change; and they’re intimidated by the perceived complexity of analytics. We’ll tackle each of these fears here.

FEAR #1: Analytics could be expensive.

REALITY: Analytics is an investment that pays for itself.

The cost of analytics can appear staggering, especially in lean times. Some of the most sophisticated analytics techniques can run into the tens – if not hundreds – of thousands of dollars for a large corporation. For many smaller companies, analytics may cost only a few thousand dollars, but that is still a lot of money. Yet analytics is not a mere expense; you are getting something valuable in return: the insights you need to make better-informed marketing decisions and identify the areas of your marketing that you can improve or enhance; the ability to target customers and prospects more effectively, resulting in increased sales and reduced costs; and the chance to establish long-term continuous improvement systems.

Had the photographer gone through with the analytics for his upcoming mailing, the entire analysis would have cost him somewhere between $1,300 and $1,800. But that fee would have enabled him to identify where his mailings were getting the greatest bang for the buck, and he might have made up the cost in reduced mailing expenses and increased revenues. Once the analytics had saved or made the photographer at least $1,800, it would have paid for itself.

FEAR #2: Analytics means a change in the way we do things.

REALITY: Analytics brings about change gradually and seamlessly.

The photographer had been using direct mail over and over again because it worked over and over again – until recently. In fact, having lost so much money on his recent direct mailings, he’s probably leery of new approaches, so he stays the course out of familiarity. That’s quite common. But this is the nice part about analytics: change can be gradual! Analytics is about testing the waters so as to reduce risk. Perhaps the photographer could have done a test where half of his mailings were executed the traditional way and half done the way the analytics recommended. Over a short period, the photographer could then decide for himself which approach was working best.

FEAR #3: Analytics is “over my head.”

REALITY: You need only understand a few high-level concepts.

Those complicated and busy mathematical formulas, in all their Greek-symbol glory, can be intimidating to people who are not mathematicians, statisticians, or economists. In fact, even I get intimidated by those equations. We must remember, however, that these formulas were developed to improve the way we do things! With analytics, all you need to know is what approach was employed, what it does, why it’s important, and how to apply it – all of which are very simple. Analysts like me deal with all the complicated stuff – finding the approach, employing it, debugging it, refining it, and then packaging it in a way that you can apply seamlessly. And if you don’t understand something about the analytical approach employed, by all means, ask! Any good analyst will give you all the guidance you need until you’re able to apply the analytics on your own.

Forgoing Analytics Can Cost Your Business Three Times Over!

Analytics is one of those tools that many marketers know can enhance their businesses, yet they decide to hold off on it – whether for cost, perceived complexity, or just plain fear. This inaction can be very dangerous. Analytics is not just a tool that improves your business decision making; it also helps you diagnose problems, identify opportunities, and make predictions about the future. Failure to do these properly costs you in three ways. First, you market incorrectly, wasting money. Second, you market to the wrong people; they don’t buy, and you lose revenue you could have made marketing to the right people. Third, you fail to recognize opportunities, and you forgo any sales those missed opportunities might have brought. Analytics is an investment that pays for itself, pays dividends down the road, brings about change in an easy and acceptable way, and whose benefits are easy to grasp and financially rewarding.

Using Statistics to Evaluate a Promotion

May 25, 2010

Marketing – as much as cash flow – is the lifeblood of any business. No matter how good your product or service may be, it’s worthless if you can’t get it in front of your customers and get them to buy it. So all businesses, large and small, must engage in marketing. And we see countless types of marketing promotions and tactics being tried: radio and TV commercials, magazine and newspaper advertisements, public relations, coupons, email blasts, and so forth. But are our promotions working? The merchant John Wanamaker, often dubbed the father of modern advertising, is said to have remarked, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

Some basic statistics can help you evaluate the effectiveness of your marketing and take away much of the mystique Wanamaker complained about. When deciding whether to do a promotion, managers and business owners have no way of knowing whether it will succeed, and in today’s economy, budgets are still tight. The cost of rolling out a full promotion can wipe out an entire marketing budget if it proves to be a fiasco. This is why many businesses run a test before doing a complete rollout. The testing helps reduce the amount of uncertainty involved in an all-out campaign.

Quite often, large companies need to choose between two or more competing campaigns for rollout. But how do they know which will be effective? Consider the example of Jenny Kaplan, owner of K-Jen, a New Orleans-style restaurant. K-Jen serves up a tasty jambalaya entrée, which is priced at $10.00. Jenny believes that the jambalaya is a draw to the restaurant and believes that by offering a discount, she can increase the average amount of the table check. Jenny decides to issue coupons via email to patrons who have opted-in to receive such promotions. She wants to knock a dollar off the price of the jambalaya as the offer, but doesn’t know whether customers would respond better to an offer worded as “$1.00 off” or as “10% off.” So, Jenny decides to test the two concepts.
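(As an aside, drawing the random test cells is easy to automate. Here is a minimal sketch, assuming Python and a plain text file of opted-in patron email addresses; the file name is hypothetical, and the sample sizes mirror the example that follows.)

```python
# Minimal sketch: draw a random test sample and split it into two offer cells.
# The file name "patron_emails.txt" is hypothetical.
import random

with open("patron_emails.txt") as f:
    patrons = [line.strip() for line in f if line.strip()]

random.seed(42)                             # reproducible draw
test_sample = random.sample(patrons, 200)   # 200 patrons out of ~1,000

offer_dollar_off = test_sample[:100]        # "$1.00 off" cell
offer_pct_off    = test_sample[100:]        # "10% off" cell
```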

Jenny goes to her database of nearly 1,000 patrons and randomly selects 200 patrons. She decides to send half of those a coupon for $1.00 off for jambalaya, and the other half a coupon for 10% off. When the coupon offer expires 10 days later, Jenny finds that 10 coupons were redeemed for each offer – a redemption rate of 10% each. Jenny observes that either wording will get the same number of people to respond. But she wonders which offer generated the largest table check. So she looks at the guest checks to which the coupons were stapled. She notices the following:

Guest Check Amounts (by Offer)

$1.00 Off    10% Off
$38.85       $50.16
$36.97       $54.44
$35.94       $32.20
$54.17       $32.69
$68.18       $51.09
$49.47       $46.18
$51.39       $57.72
$32.72       $44.30
$22.59       $59.29
$24.13       $22.94

Jenny quickly computes the average for each offer. The “$1.00 off” coupon generated an average table check of $41.44; the “10% off” coupon generated an average of $45.10. At first glance, it appears that the 10% off promotion generated a higher guest check. But is that difference meaningful, or is it due to chance? Jenny needs to do further analysis.

Hypothesis Testing

How does Jenny determine whether the 10% off coupon really did better than the $1.00 off coupon? She can use statistical hypothesis testing, a structured analytical method for comparing the difference between two groups – in this case, two promotions. Jenny starts her analysis by formulating two hypotheses: a null hypothesis, which states that there is no difference in the average check amount between the two offers, and an alternative hypothesis, which states that there is, in fact, a difference. The null hypothesis is often denoted H0, and the alternative hypothesis HA. Jenny refers to the $1.00 off offer as Offer #1 and the 10% off offer as Offer #2, and she wants to compare their means, denoted μ1 and μ2, respectively. Jenny writes down her two hypotheses:

H0: The average guest check amount for the two offers is equal.

HA: The average guest check amount for the two offers is not equal.

Or, more succinctly:

H0: μ1 = μ2

HA: μ1≠μ2

 

Now, Jenny is ready to go to work. Note that the symbol μ denotes the population mean she wants to measure. Because Jenny ran her test on a portion – a sample – of her database, the averages she computed were sample averages, denoted as x̄. As we stated earlier, the average table checks for the “$1.00 off” and “10% off” offers were x̄1 = $41.44 and x̄2 = $45.10, respectively. Jenny needs to approximate μ using x̄. She must also compute the sample standard deviation, or s, for each offer.

Computing the Sample Standard Deviation

To compute the sample standard deviation, Jenny must subtract the mean of a particular offer from each of its check amounts in the sample, square each difference, sum them up, divide by the total number of observations minus 1 (here, 9), and then take the square root:

$1.00 Off

Actual Table Check    Average Table Check    Difference    Difference Squared
$38.85                $41.44                 -$2.59        $6.71
$36.97                $41.44                 -$4.47        $19.99
$35.94                $41.44                 -$5.50        $30.26
$54.17                $41.44                 $12.73        $162.03
$68.18                $41.44                 $26.74        $714.97
$49.47                $41.44                 $8.03         $64.46
$51.39                $41.44                 $9.95         $98.98
$32.72                $41.44                 -$8.72        $76.06
$22.59                $41.44                 -$18.85       $355.36
$24.13                $41.44                 -$17.31       $299.67
                                             Total         $1,828.50
                                             S²1 =         $203.17
                                             S1 =          $14.25

 

10% Off

Actual Table Check    Average Table Check    Difference    Difference Squared
$50.16                $45.10                 $5.06         $25.59
$54.44                $45.10                 $9.34         $87.22
$32.20                $45.10                 -$12.90       $166.44
$32.69                $45.10                 -$12.41       $154.03
$51.09                $45.10                 $5.99         $35.87
$46.18                $45.10                 $1.08         $1.16
$57.72                $45.10                 $12.62        $159.24
$44.30                $45.10                 -$0.80        $0.64
$59.29                $45.10                 $14.19        $201.33
$22.94                $45.10                 -$22.16       $491.11
                                             Total         $1,322.63
                                             S²2 =         $146.96
                                             S2 =          $12.12

 

Notice the notation S². That quantity is known as the variance. The variance and the standard deviation measure the average distance between each data point and the mean. When data are normally distributed, about 95% of all observations fall within two standard deviations of the mean (more precisely, 1.96 standard deviations). Hence, approximately 95% of the guest checks for the $1.00 off offer should fall within $41.44 ± 1.96*($14.25), or between $13.51 and $69.37. All ten fall within this range. For the 10% off offer, about 95% should fall within $45.10 ± 1.96*($12.12), or between $21.34 and $68.86. All ten observations also fall within this range.
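For readers who would rather let software do the arithmetic, here is a short sketch that reproduces the sample means and sample standard deviations above. It assumes Python 3; the standard library’s statistics module divides by n – 1, matching Jenny’s hand calculation.

```python
# Reproduce Jenny's sample means and sample standard deviations.
# statistics.stdev() divides by n - 1, matching the tables above.
import statistics

dollar_off = [38.85, 36.97, 35.94, 54.17, 68.18, 49.47, 51.39, 32.72, 22.59, 24.13]
pct_off    = [50.16, 54.44, 32.20, 32.69, 51.09, 46.18, 57.72, 44.30, 59.29, 22.94]

for name, checks in [("$1.00 off", dollar_off), ("10% off", pct_off)]:
    mean = statistics.mean(checks)
    sd = statistics.stdev(checks)          # sample standard deviation (n - 1)
    low, high = mean - 1.96 * sd, mean + 1.96 * sd
    print(f"{name}: mean = {mean:.2f}, s = {sd:.2f}, "
          f"~95% range = ({low:.2f}, {high:.2f})")

# Expected output: means of about $41.44 and $45.10, and sample standard
# deviations of about $14.25 and $12.12, as in the hand calculation.
```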

Degrees of Freedom and Pooled Standard Deviation

Jenny noticed two things immediately: first, the 10% off coupon has the higher sample average; second, each individual table check is closer to its mean than is the case for the $1.00 off coupon. Also notice that when computing the sample standard deviation for each offer, Jenny divided by 9, not 10. Why? Because samples are subject to error, and we must account for that. Each observation gives us information about the population’s actual values, but because Jenny used the sample mean as her estimate of the population mean in each calculation, she gives up one observation’s worth of information – that is, she loses a degree of freedom. In this example, Jenny has 20 total observations; since she made one such estimate for each offer, she loses two degrees of freedom, leaving her with 18 (10 + 10 – 2).

Knowing the remaining degrees of freedom, Jenny must pool the standard deviations, weighting each by its degrees of freedom. This weighting would matter even more if the sample sizes of the two offers were not equal. The pooled standard deviation is the square root of the pooled variance, which is given by:

S²p = ((n1 – 1)S²1 + (n2 – 1)S²2) / (n1 + n2 – 2)

FYI – n is simply the sample size of each group. Jenny then plugs in her numbers:

S²p = ((9 * $203.17) + (9 * $146.96)) / (10 + 10 – 2)

= ($1,828.53 + $1,322.64) / 18

= $3,151.17 / 18

= $175.07

Now take the square root: $13.23.

Hence, the pooled standard deviation is $13.23.
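The pooled calculation is just as easy to script. This sketch, again assuming Python, plugs in the two sample variances computed above:

```python
# Pooled standard deviation for two samples, weighted by degrees of freedom.
import math

n1, n2 = 10, 10
var1, var2 = 203.17, 146.96            # sample variances (S²) from the tables above

pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
pooled_sd = math.sqrt(pooled_var)
print(round(pooled_var, 2), round(pooled_sd, 2))   # about 175.07 and 13.23
```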

Computing the t-Test Statistic

Now the fun begins. Jenny knows the sample means of the two offers; she knows the hypothesized difference between the two population means (zero, since the null hypothesis says they are equal); she knows the pooled standard deviation; she knows the sample sizes; and she knows the degrees of freedom. Jenny must now calculate the t-test statistic. The t-test statistic, or t-value, represents the number of estimated standard errors the observed difference in sample means lies from the hypothesized difference. The t-value is computed as follows:

t = ((x̄1 – x̄2) – (μ1 – μ2)) / (Sp * SQRT(1/n1 + 1/n2))

 

So Jenny sets to work computing her t-Test Statistic:

t = (($41.44 – $45.10) – 0) / ($13.23 * SQRT(1/10 + 1/10))

= -$3.66 / ($13.23 * SQRT(1/5))

= -$3.66 / ($13.23 * 0.4472)

= -$3.66 / $5.92

= -0.62

This t-statistic gives Jenny a basis for testing her hypothesis. Jenny’s t-statistic indicates that the difference in sample table checks between the two offers is 0.62 standard errors below the hypothesized difference of zero. We now need the critical t – the value we get from a t-distribution table, available in most statistics textbooks and online. Since we are testing at the 95% confidence level (a 5% significance level), and since we must account for a small sample, our critical t-value is adjusted slightly upward from the 1.96 of the normal distribution. For 18 degrees of freedom, our critical t is 2.10. The larger the sample size, the closer the critical t gets to 1.96.
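Finally, here is a sketch of the entire test run directly on the raw guest checks, assuming Python with SciPy installed. scipy.stats.ttest_ind performs this same pooled-variance two-sample t-test, and scipy.stats.t.ppf supplies the critical t for 18 degrees of freedom.

```python
# Two-sample (pooled-variance) t-test on the raw guest checks, assuming SciPy.
from scipy import stats

dollar_off = [38.85, 36.97, 35.94, 54.17, 68.18, 49.47, 51.39, 32.72, 22.59, 24.13]
pct_off    = [50.16, 54.44, 32.20, 32.69, 51.09, 46.18, 57.72, 44.30, 59.29, 22.94]

t_stat, p_value = stats.ttest_ind(dollar_off, pct_off, equal_var=True)

# Critical t for a two-tailed test at the 5% significance level, 18 df.
critical_t = stats.t.ppf(0.975, df=len(dollar_off) + len(pct_off) - 2)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, critical t = ±{critical_t:.2f}")
# t comes out near -0.62, well inside ±2.10, which is the same conclusion
# Jenny reaches by hand below.
```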

So, does Jenny Accept or Reject her Null Hypothesis (Translation: Is the “10% Off” Offer Better than the “$1.00 Off” Offer)?

Jenny now has all the information she needs to determine whether one offer worked better than the other. What does the critical t of 2.10 mean? If Jenny’s t-statistic is greater than 2.10 or, since one offer could just as well be lower than the other, less than -2.10, then she would reject her null hypothesis, as there would be sufficient evidence to suggest that the two means are not equal. Is that the case?

Jenny’s t-statistic is -0.62, which falls between -2.10 and 2.10, well within the acceptance region. Jenny should not reject H0, since there is not enough evidence to suggest that one offer was better than the other at generating higher table checks. In fact, there is nothing to say that the difference between the two offers is due to anything other than chance.

What Does Jenny Do Now?

Basically, Jenny can conclude that there is not enough evidence that either coupon was better or worse than the other at generating higher table check amounts. This does not mean her hypotheses were proven true or false, just that there was not enough statistical evidence to say so. In this case, we did not accept the null hypothesis; rather, we failed to reject it. Jenny can do a few things:

  1. She can run another test, and see if the same phenomenon holds.
  2. Jenny can accept that both offers work equally well, and compare their overall average table checks to those of patrons who ordered jambalaya without a coupon during the time the offer ran; if the coupons generated higher average table checks (using the hypothesis testing procedures outlined above) than those who paid full price, she may choose to roll out a complete promotion using either or both of the offers described above.
  3. Jenny may decide that neither coupon offer raised average check amounts and choose not to do a full rollout after all.

So Why am I Telling You This?

The purpose of this blog post was to take you step by step through how you can use a simple concept like the t-test to judge the performance of two promotion concepts. Although a spreadsheet like Excel can run this test in seconds, I wanted to walk you through the theory in layman’s terms, so that you can grasp it and then apply it to your business. Analysights is in the business of helping companies – large and small – succeed at marketing, and this blog post is one ingredient in the recipe for your marketing success. If you would like some assistance in setting up a promotion test or in evaluating the effectiveness of a campaign, feel free to contact us at www.analysights.com.