Posts Tagged ‘E-mail marketing’

Using Statistics to Evaluate a Promotion

May 25, 2010

Marketing – as much as cash flow – is the lifeblood of any business. No matter how good your product or service may be, it’s worthless if you can’t get it in front of your customers and get them to buy it. So all businesses, large and small, must engage in marketing. And we see countless types of marketing promotions or tactics being tried: radio and TV commercials, magazine and newspaper advertisements, public relations, coupons, email blasts, and so forth. But are our promotions working? The merchant John Wanamaker, often dubbed the father of modern advertising, is said to have remarked, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

Some basic statistics can help you evaluate the effectiveness of your marketing and take away much of the mystery Wanamaker complained about. When deciding whether to do a promotion, managers and business owners have no way of knowing in advance whether it will succeed; and in today’s economy, budgets are tight. The cost of rolling out a full promotion can wipe out an entire marketing budget if it proves to be a fiasco. This is why many businesses run a test before doing a complete rollout. The testing helps to reduce the amount of uncertainty involved in an all-out campaign.

Quite often, large companies need to choose between two or more competing campaigns for rollout. But how do they know which will be effective? Consider the example of Jenny Kaplan, owner of K-Jen, a New Orleans-style restaurant. K-Jen serves up a tasty jambalaya entrée, which is priced at $10.00. Jenny believes that the jambalaya is a draw to the restaurant and believes that by offering a discount, she can increase the average amount of the table check. Jenny decides to issue coupons via email to patrons who have opted-in to receive such promotions. She wants to knock a dollar off the price of the jambalaya as the offer, but doesn’t know whether customers would respond better to an offer worded as “$1.00 off” or as “10% off.” So, Jenny decides to test the two concepts.

Jenny goes to her database of nearly 1,000 patrons and randomly selects 200 of them. She decides to send half of those a coupon for $1.00 off jambalaya, and the other half a coupon for 10% off. When the coupon offer expires 10 days later, Jenny finds that 10 coupons were redeemed for each offer – a redemption rate of 10% for each. Jenny observes that either wording will get the same number of people to respond. But she wonders which offer generated the larger table checks. So she looks at the guest checks to which the coupons were stapled. She notices the following:

Guest Check Amounts

  $1.00 off    10% Off
  $38.85       $50.16
  $36.97       $54.44
  $35.94       $32.20
  $54.17       $32.69
  $68.18       $51.09
  $49.47       $46.18
  $51.39       $57.72
  $32.72       $44.30
  $22.59       $59.29
  $24.13       $22.94

Jenny quickly computes the average for each offer. The “$1.00 off” coupon generated an average table check of $41.44; the “10% off” coupon generated an average of $45.10. At first glance, it appears that the 10% off promotion generated a higher guest check. But is that difference meaningful, or is it due to chance? Jenny needs to do further analysis.
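
If you want to reproduce Jenny’s arithmetic, here’s a minimal sketch in Python (the check amounts are copied from the table above; the variable names are mine, not anything Jenny used):

    offer1 = [38.85, 36.97, 35.94, 54.17, 68.18, 49.47, 51.39, 32.72, 22.59, 24.13]  # $1.00 off
    offer2 = [50.16, 54.44, 32.20, 32.69, 51.09, 46.18, 57.72, 44.30, 59.29, 22.94]  # 10% off

    mean1 = sum(offer1) / len(offer1)
    mean2 = sum(offer2) / len(offer2)
    print(round(mean1, 2), round(mean2, 2))  # 41.44 45.1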

Hypothesis Testing

How does Jenny determine whether the 10% off coupon really did better than the $1.00 off coupon? She can use statistical hypothesis testing, a structured analytical method for comparing the difference between two groups – in this case, two promotions. Jenny starts her analysis by formulating two hypotheses: a null hypothesis, which states that there is no difference in the average check amount between the two offers; and an alternative hypothesis, which states that there is, in fact, a difference. The null hypothesis is often denoted H₀, and the alternative hypothesis Hₐ. Jenny refers to the $1.00 off offer as Offer #1 and the 10% off offer as Offer #2, and she wants to compare the means of the two offers, denoted μ₁ and μ₂, respectively. Jenny writes down her two hypotheses:

H₀: The average guest check amount for the two offers is equal.

Hₐ: The average guest check amount for the two offers is not equal.

Or, more succinctly:

H₀: μ₁ = μ₂

Hₐ: μ₁ ≠ μ₂

 

Now, Jenny is ready to go to work. Note that the symbol μ denotes the mean of the population she wants to measure. Because Jenny ran her test on a portion – a sample – of her database, the averages she computed are sample averages, denoted x̄. As we stated earlier, the average table checks for the “$1.00 off” and “10% off” offers were x̄₁ = $41.44 and x̄₂ = $45.10, respectively. Jenny must approximate μ using x̄. She must also compute the sample standard deviation, or s, for each offer.

Computing the Sample Standard Deviation

To compute the sample standard deviation, Jenny must subtract the mean of a particular offer from each of its check amounts in the sample; square each difference; sum them up; divide by the total number of observations minus 1 (here, 9); and then take the square root:

$1.00 Off

  Actual Table Check   Average Table Check   Difference   Difference Squared
  $38.85               $41.44                -$2.59       $6.71
  $36.97               $41.44                -$4.47       $19.99
  $35.94               $41.44                -$5.50       $30.26
  $54.17               $41.44                $12.73       $162.03
  $68.18               $41.44                $26.74       $714.97
  $49.47               $41.44                $8.03        $64.46
  $51.39               $41.44                $9.95        $98.98
  $32.72               $41.44                -$8.72       $76.06
  $22.59               $41.44                -$18.85      $355.36
  $24.13               $41.44                -$17.31      $299.67

  Total                                                   $1,828.50
  S²₁ =                                                   $203.17
  S₁ =                                                    $14.25

 

10% Off

  Actual Table Check   Average Table Check   Difference   Difference Squared
  $50.16               $45.10                $5.06        $25.59
  $54.44               $45.10                $9.34        $87.22
  $32.20               $45.10                -$12.90      $166.44
  $32.69               $45.10                -$12.41      $154.03
  $51.09               $45.10                $5.99        $35.87
  $46.18               $45.10                $1.08        $1.16
  $57.72               $45.10                $12.62       $159.24
  $44.30               $45.10                -$0.80       $0.64
  $59.29               $45.10                $14.19       $201.33
  $22.94               $45.10                -$22.16      $491.11

  Total                                                   $1,322.63
  S²₂ =                                                   $146.96
  S₂ =                                                    $12.12

 

Notice the notation S². That quantity is known as the variance. The variance and the standard deviation measure how far the data spread around the mean. When data are normally distributed, about 95% of all observations fall within two standard deviations of the mean (more precisely, 1.96 standard deviations). Hence, approximately 95% of the guest checks for the $1.00 off offer should fall within $41.44 ± 1.96*($14.25), or between $13.51 and $69.37. All ten fall within this range. For the 10% off offer, about 95% should fall within $45.10 ± 1.96*($12.12), or between $21.34 and $68.86. All 10 observations also fall within this range.
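
If you’d rather not build those tables by hand, here’s a minimal sketch in Python that checks the same numbers (the lists repeat the guest check amounts from the tables above; the statistics module’s stdev divides by n – 1, just as Jenny did):

    import statistics

    offer1 = [38.85, 36.97, 35.94, 54.17, 68.18, 49.47, 51.39, 32.72, 22.59, 24.13]  # $1.00 off
    offer2 = [50.16, 54.44, 32.20, 32.69, 51.09, 46.18, 57.72, 44.30, 59.29, 22.94]  # 10% off

    s1 = statistics.stdev(offer1)  # sample standard deviation, about 14.25
    s2 = statistics.stdev(offer2)  # about 12.12

    # Confirm that every guest check falls within mean ± 1.96 standard deviations
    for checks, s in [(offer1, s1), (offer2, s2)]:
        mean = statistics.mean(checks)
        low, high = mean - 1.96 * s, mean + 1.96 * s
        print(all(low <= amount <= high for amount in checks))  # True for both offers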

Degrees of Freedom and Pooled Standard Deviation

Jenny notices two things immediately: first, the 10% off coupon has the higher sample average; second, the individual table checks for that offer cluster more tightly around their mean than those for the $1.00 off coupon do. Also notice that when computing the sample standard deviation for each offer, Jenny divided by 9, not 10. Why? Because she was estimating the population standard deviation from a sample. Since samples are subject to error, we must account for that. Each observation gives us information about the population’s actual values, but because Jenny made her estimate from a sample, she gives up one observation to account for the sampling error – that is, she loses a degree of freedom. In this example, Jenny has 20 total observations; since she estimated the population standard deviation for both offers, she loses two degrees of freedom, leaving her with 18 (10 + 10 – 2).

Knowing the remaining degrees of freedom, Jenny must pool the standard deviations, weighting each by its degrees of freedom. This weighting would be especially important if the sample sizes of the two offers were not equal. The pooled variance – whose square root is the pooled standard deviation – is given by:

S²p = ((n₁ – 1) * S²₁ + (n₂ – 1) * S²₂) / (n₁ + n₂ – 2)

FYI – n is simply the sample size of each offer. Jenny then computes the pooled variance:

S²p = ((9 * $203.17) + (9 * $146.96)) / (10 + 10 – 2)

= ($1,828.53 + $1,322.64)/18

= $3,151.17/18

= $175.07

Taking the square root gives the pooled standard deviation: Sp = SQRT($175.07) = $13.23.
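
In code, the pooling step might look like this (again a sketch; the lists repeat the check amounts from above, and statistics.variance gives the sample variance, S²):

    import statistics
    from math import sqrt

    offer1 = [38.85, 36.97, 35.94, 54.17, 68.18, 49.47, 51.39, 32.72, 22.59, 24.13]  # $1.00 off
    offer2 = [50.16, 54.44, 32.20, 32.69, 51.09, 46.18, 57.72, 44.30, 59.29, 22.94]  # 10% off

    n1, n2 = len(offer1), len(offer2)
    var1 = statistics.variance(offer1)  # sample variance, about 203.17
    var2 = statistics.variance(offer2)  # about 146.96

    # Weight each variance by its degrees of freedom (n - 1), then divide by the total df
    pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)  # about 175.07
    pooled_sd = sqrt(pooled_var)                                      # about 13.23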

Computing the t-Test Statistic

Now the fun begins. Jenny knows the sample means of the two offers; she knows the hypothesized difference between the two population means (zero, since the null hypothesis says they are equal); she knows the pooled standard deviation; she knows the sample sizes; and she knows the degrees of freedom. Jenny must now calculate the t-Test statistic. The t-Test statistic, or the t-value, represents the number of estimated standard errors the observed difference in sample means lies from the hypothesized difference. The t-value is computed as follows:

t = ((x̄₁ – x̄₂) – (μ₁ – μ₂)) / (Sp * SQRT(1/n₁ + 1/n₂))

So Jenny sets to work computing her t-Test Statistic:

t = (($41.44 – $45.10) – 0) / ($13.23 * SQRT(1/10 + 1/10))

= -$3.66 / ($13.23 * SQRT(1/5))

= -$3.66 / ($13.23 * 0.45)

= -$3.66 / $5.92

= -0.62

This t-statistic gives Jenny a basis for testing her hypothesis. Her t-statistic indicates that the difference in sample table checks between the two offers is 0.62 standard errors below the hypothesized difference of zero. We now need to determine the critical t – the value we get from a t-distribution table, available in most statistics textbooks and online. Since we are testing at the 95% confidence level, and since we must account for a small sample, our critical t-value is adjusted slightly from the 1.96 standard deviations we would use for a large sample. For 18 degrees of freedom, our critical t is 2.10. The larger the sample size, the closer the critical t gets to 1.96.

So, does Jenny Accept or Reject her Null Hypothesis (Translation: Is the “10% Off” Offer Better than the “$1.00 Off” Offer)?

Jenny now has all the information she needs to determine whether one offer worked better than the other. What does the critical t of 2.10 mean? If Jenny’s t-statistic is greater than 2.10 – or, since one offer’s mean can be lower than the other’s, less than -2.10 – then she would reject her null hypothesis, as there would be sufficient evidence to suggest that the two means are not equal. Is that the case?

Jenny’s t-statistic is -0.62, which lies between -2.10 and 2.10, well inside the critical values. Jenny should not reject H₀, since there is not enough evidence to suggest that one offer was better than the other at generating higher table checks. In fact, there’s nothing to say that the difference between the two offers is due to anything other than chance.
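
For completeness, here’s how the whole test might look in code – a minimal sketch in Python, assuming the scipy library is available (nothing in Jenny’s story specifies her tools). Its ttest_ind function with equal_var=True performs exactly this pooled two-sample t-test:

    from scipy import stats

    offer1 = [38.85, 36.97, 35.94, 54.17, 68.18, 49.47, 51.39, 32.72, 22.59, 24.13]  # $1.00 off
    offer2 = [50.16, 54.44, 32.20, 32.69, 51.09, 46.18, 57.72, 44.30, 59.29, 22.94]  # 10% off

    # Pooled (equal-variance) two-sample t-test
    t_stat, p_value = stats.ttest_ind(offer1, offer2, equal_var=True)
    print(round(t_stat, 2))                 # about -0.62

    # Critical t for a two-tailed test at the 95% confidence level, 18 degrees of freedom
    critical_t = stats.t.ppf(0.975, df=18)  # about 2.10

    if abs(t_stat) < critical_t:
        print("Fail to reject H0: no evidence one offer beats the other")
    else:
        print("Reject H0: the offers appear to differ")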

What Does Jenny Do Now?

Basically, Jenny can conclude only that there is not enough evidence that the “$1.00 off” coupon was any better or worse than the “10% off” coupon at generating higher table check amounts. This does not mean the hypotheses were proven true or false, just that there was not enough statistical evidence to say either way. In this case, we did not accept the null hypothesis; rather, we failed to reject it. Jenny can do a few things:

  1. She can run another test, and see if the same phenomenon holds.
  2. She can accept that both offers work equally well, and compare their overall average table checks to those of patrons who ordered jambalaya without a coupon during the time the offers ran; if the coupons generated average table checks that were higher (using the hypothesis-testing procedure outlined above) than those of patrons who paid full price, then she may choose to roll out a complete promotion using either or both of the offers described above.
  3. She may decide that neither coupon offer raised average check amounts and choose not to do a full rollout after all.

So Why am I Telling You This?

The purpose of this blog post was to take you step-by-step through how you can use a simple concept like the t-test to judge the performance of two promotion concepts. Although a spreadsheet like Excel can run this test in seconds, I wanted to walk you through the theory in layman’s terms, so that you can grasp it and then apply it to your business. Analysights is in the business of helping companies – large and small – succeed at marketing, and this blog post is one ingredient in the recipe for your marketing success. If you would like some assistance in setting up a promotion test or in evaluating the effectiveness of a campaign, feel free to contact us at www.analysights.com.

 


Don’t Confuse E-mail Selling with E-mail Marketing

April 27, 2009

In E-mail Marketing vs. E-mail Sales, e-mail marketing expert and independent consultant Jeanne Jennings discussed how some companies are confusing e-mail marketing with e-mail selling, and thus not reaping many of the benefits e-mail marketing can offer them.

Jennings points out that e-mail marketing is focused on longer-term objectives, while e-mail sales are geared toward immediate revenue, and that companies that send nothing but promotional e-mails tend to fatigue their lists, as well as limit their audience to customers and prospects who are already at that stage in the purchasing cycle.  I could not agree more.

Jennings reiterates what we marketers must never forget: e-mail marketing – like any marketing – is more than selling; it’s also brand-building, relationship-building, keeping your company at the top of your customers’ minds, and exchanging information between you and your customers.  Concentrating your e-mails solely on short-term sales can cost you greatly in foregone future repeat sales that often accompany good customer relationships.

The ideal proportion of your e-mail marketing messages that should be non-promotional vs. promotional varies by industry, product category, and other factors.  However, your company can reap great benefits from a healthy mix of these two types of messages.

Sending a promotional e-mail will likely succeed if a prospect is in the buying stage.  Another e-mail that offers news and tips on how to use your product may help increase product usage and create customer loyalty.  An e-mail that illustrates the benefits of your product or service may help a prospect who is still in the needs discovery phase of the buying cycle to think of your company when he/she is ready to buy.  Sending a confirmation e-mail after a purchase – coupled with additional information –  can trigger some impulse spending by your customer.

Another useful benefit of taking a long-term focus to e-mail marketing is that link-clicks are trackable.  You can track the behavior of someone on your e-mail list when he/she opens your e-mail and clicks on a link.  This can yield valuable clues about the type of content that interests the prospect, and can help you tailor both your non-promotional and promotional e-mails to the prospects’ preferences.  When you send your prospects and customers e-mail that interests them, they believe you have their best interest in mind, and they are more likely to buy from you.

In these recessionary times, companies need to make sales.  But hard selling, whether online or off, is a sign of desperation.  Companies whose marketing demonstrates  – in every channel – that they understand and care about their customers will more than make up for today’s lost sales tomorrow.

Three Metrics for E-mail Marketing Excellence

April 24, 2009

The principles of direct marketing apply just as much online as they do offline.  The process for tracking the performance of an e-mail campaign is essentially the same as for that of a direct mail campaign.

How do you know if your e-mail campaigns are working?  Start with three basic statistics: your bounce rate, your open rate, and your click-through rate.

Bounce Rate

The bounce rate tells you the percentage of your e-mails that were returned because they were undeliverable.  If you sent 10,000 e-mails and 1,000 were undeliverable, your bounce rate is 10%.  The 9,000 e-mails that were delivered are known as your non-bounce total.

Use the bounce rate to assess the quality and recency of your e-mail list.  Eyeball the list of e-mail addresses that bounced back.  You may find that some are simply invalid addresses (“,com” instead of “.com”) which can easily be rectified.  Others may be incomplete and thus useless.  Still other addresses might be old, which suggests you should have a continuous process in place for your customers and prospects to update their e-mail addresses.

Reducing bounce rate should be an ongoing objective of your e-mail marketing strategy.

Open Rate

The open rate is the number of recipients who opened an HTML version of your e-mail, expressed as a percentage of your non-bounce total.  The open rate can give you an idea of how compelling and attention-getting your e-mail is.  Continuing with the example above, if 1,800 recipients opened your e-mail, then you have an open rate of 20% (1,800/9,000).

The “HTML version” and the non-bounce total are very important components of this definition.  E-mail Service Providers (ESPs) can only track HTML e-mail messages, not text.  And the use of the non-bounce total has its own share of problems, because the non-bounce total isn’t synonymous with the total e-mails delivered.

E-mails may not be counted as bounced because some e-mail servers inadvertently send them to a junk folder on the recipient’s computer, where he/she may never see them.  Furthermore, if the e-mail is bounced not by the server but by a portable device or software on the recipient’s computer, it will not show up in your e-mail tracking report.  Hence, you are basing your open rate on the number of e-mails sent, as opposed to delivered.

An additional problem with the open rate lies in the definition of “opened.”  Your e-mail is considered “opened” if the recipient either 1) opens it in full view or lets its images display in the preview pane, or 2) clicks a link in the e-mail.  The preview pane is a double-edged sword: If the recipient let the images of your e-mail display in the pane, your open rate may be overstated.  On the other hand, if the recipient didn’t allow images to show in the pane, but scanned the e-mail, your open rate will be understated.

You might want to use some qualitative methods to estimate the degree to which these flaws exist.  For example, a survey may give you an idea of the percentage of your customers who use the preview pane and allow images to display; a test of 100 pre-recruited members of your list who receive your e-mail (and report whether or not they received it) might give you clues into how many non-bounced e-mails weren’t actually delivered.  This will help you place a margin of error around your open rate.

If you find your open rates declining over several campaigns, that is a sign to make your messages more compelling.

Click-Through Rate

Your click-through rate tells you the percentage of unique individuals who click at least one link in your opened e-mail.  If, of the 1,800 e-mails that were opened, 180 recipients clicked at least one link, then your click-through rate is 10% (180/1,800).  It’s important to subtract multiple clicks by a recipient (whether he/she clicked more than one link, or one link several times), in order to prevent double counting.  Most ESPs do this for you seamlessly.

Your click-through rate is a measure of how well your e-mail calls your prospects to action.  Low or declining click-through rates suggest your e-mail message isn’t generating interest or desire.
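
To keep the arithmetic straight, here is a minimal sketch in Python that ties the three rates together (the figures mirror the 10,000-e-mail example above; the variable names are my own):

    sent = 10_000
    bounced = 1_000   # e-mails returned as undeliverable
    opened = 1_800    # unique recipients who opened the HTML version
    clickers = 180    # unique recipients who clicked at least one link

    non_bounce_total = sent - bounced         # 9,000
    bounce_rate = bounced / sent              # 0.10, i.e. 10%
    open_rate = opened / non_bounce_total     # 0.20, i.e. 20%
    click_through_rate = clickers / opened    # 0.10, i.e. 10%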

Always remember to:

  1. Track every e-mail campaign you do;
  2. Look at non-click activity (increased store traffic, phone inquiries, etc.) that occurs immediately following an e-mail campaign;
  3. Look at activity on your Web site immediately following an e-mail campaign; and
  4. Track your metrics over time.  Look for trends in these metrics to refine and improve your e-mail marketing results.

  

“…And then there are statistics”

March 18, 2009

Benjamin Disraeli once said, “There are lies, damned lies, and then there are statistics.”  Every day we are bombarded with all kinds of statistics: the unemployment rate, a baseball player’s batting average, starting salaries for teachers, findings from a recent survey, and so on.  In the marketing research field, we live, breathe and dream statistics.

Yet, most people do not fully understand statistics and many blindly take them at face value. As a marketing research consultant, I love when – in fact, I insist that – my clients challenge the statistics I give them. Since marketing research can be expensive, clients should make their research vendors defend their claims; it is their right.  

Today’s post gives you an example of how to analyze statistics critically.  The example I share is from a UK blog post entitled Report: Using analytics can boost email marketing returns, by Greig Daines.  In this post, Daines references a report from Nedstat, a European web analytics firm.  Let me reiterate before I go any further that my purpose here is NOT to say the claims or findings in this report are wrong or biased; the purpose is to show how to place the findings in perspective.

The Nedstat report claims that, according to a survey it conducted, the e-mail marketing revenues of firms that use web analytics are almost quadruple those of firms that do not.  The survey also found that companies can increase their profits from e-mail marketing 18-fold by using  web analytics to track and refine their e-mail campaigns.  Impressive, isn’t it?

That’s where the critical thinking begins.  The first question to ask is: “Who conducted the survey?”, followed by “What is their motive?”  Nedstat is a web analytics firm.  They certainly want people to see the value of web analytics!  Also, you need to ask: “Who was surveyed?” and “How many were surveyed?”  According to the report, 159 e-mail marketers in the UK, France, and Germany were surveyed.

“How were these 159 e-mail marketers selected?”  If they were selected randomly – that is, if every e-mail marketer in these three countries had an equal chance of being selected for the survey – then Nedstat has a representative sample.  On the other hand, if these 159 marketers self-selected to take the survey, the sample is not representative, and the claims made above cannot be generalized to all e-mail marketers in those countries.

“How many people were invited to take the survey?”  To be sure, 159 marketers were surveyed; but what if 1,590 marketers in all had been invited to participate?  Then only 10% of those invited responded.  Are the 159 e-mail marketers who responded different from the 1,431 who did not?  Perhaps the non-responders felt that their web analytics efforts were not working, and didn’t want to divulge that in the survey.  Maybe they were very successful and didn’t want to alert their competition.  Maybe the 159 responders had a vested interest in the field of web analytics and wanted to sound off.  If any of these is the case, the survey findings are bogus.

Other questions you need to ask: “What were the average revenue and profit of the e-mail marketers who did and didn’t use web analytics?”  Eighteen-fold and four-fold don’t mean anything until you know the averages.  “What was the standard deviation of revenues and profits?”  That is, how spread out are the data?  There are many more questions you can ask, but these are enough to put you in the driver’s seat.

Remember:

  1. Make sure your market research vendor clearly explains its methodology for data collection and analysis;
  2. Consider the source of the data.  There’s always a purpose for their publishing those numbers;
  3. Make sure you are given all the statistics you need to know the full story so that you can make the most informed decision; and
  4. Go with your gut.  If research findings sound too good to be true, most likely they are.  Challenge your vendor all the more.