Archive for the ‘Data Mining’ Category

“Big Data” Benefits Small Businesses Too

May 8, 2014

(This post appeared on our successor blog, The Analysights Data Mine, on Monday, May 5, 2014).

One misconception about “big data” is that it is only for large enterprises. On its face, such a claim sounds logical; in reality, however, “big data” is just as vital to a small business as it is to a major corporation. While the amount of data a small business generates is nowhere near as large as what a major corporation might generate, a small business can still analyze that data to find insightful ways to run more efficiently.

Imagine a family restaurant in your local town. Such a restaurant may not have a loyalty card like a chain restaurant; it may not have any process by which to target customers; in fact, the restaurant may not even be computerized. But the restaurant still generates a LOT of useful data.

What is the richest source of the restaurant’s data? It’s the check on which the server records the table’s orders. If a restaurant saves these checks, the owner can tally the entrees, appetizers, and side orders that were made during a given period of time. This can help the restaurateur learn a lot of useful information, such as:

  • What entrée or entrées are most commonly sold?
  • What side dishes are most commonly ordered with a particular entrée?
  • What is the most popular entrée sold on a Friday or Saturday night?
  • How many refills does a typical table order?
  • What is the average number of patrons per table?
  • What are the busiest and slowest nights/times of the week?
  • How many tables and/or patrons come in on a particular night of the week?

Information like this can help the restaurateur estimate how many of each entrée to prepare on a given day; order sufficient ingredients for those entrées and menu items; forecast business volume for various nights of the week; and staff adequately.
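
To make this concrete, below is a minimal sketch of how such a tally might work once the checks are transcribed; the order records and field layout here are entirely hypothetical.

    from collections import Counter

    # Each record: (night of the week, entrée ordered on a check) -- hypothetical data
    orders = [
        ("Friday", "lasagna"),
        ("Friday", "fried chicken"),
        ("Saturday", "lasagna"),
        ("Saturday", "lasagna"),
        ("Tuesday", "meatloaf"),
    ]

    # Tally entrées overall and by night of the week
    overall = Counter(entree for _, entree in orders)
    by_night = Counter(orders)

    print(overall.most_common(1))             # most commonly sold entrée
    print(by_night[("Saturday", "lasagna")])  # lasagna orders on Saturday nights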

In addition, such information can aid menu planning and upgrades. For example, the restaurant owner can use the above information to look for commonalities among the most popular items. Perhaps the most popular entrées each feature some prominent ingredient. In that case, the restaurant can direct its chef to test new entrées and menu items built around that ingredient. Moreover, if particular entrées are not selling well, the restaurant owner can try featuring or promoting them in some way, or discontinue them altogether.

Also, in the age of social media, sites like Yelp and TripAdvisor can provide the restaurateur with free market research. If customers are complaining about long waits for service, the restaurateur may use that feedback to increase staffing or provide extra training to the waitstaff. If reviewers are raving about specific menu items, the restaurateur can promote those items or create new entrées that are similar.

“Big Data” is a subjective and relative term. The data collected by a small family restaurant is usually not large enough to warrant the use of analytics tools such as SAS or SPSS, but it is still rich enough to provide valuable insights that help a small business operate successfully.

 


Big Data, Big Bucks

May 6, 2014

(This post appeared last week on our successor blog, the Analysights Data Mine)

In their 1996 bestselling book, The Millionaire Next Door, Thomas J. Stanley and William D. Danko constructed profiles of the typical American millionaire.  One common characteristic the authors observed was that these millionaires “chose the right occupation.”  When Stanley and Danko wrote Millionaire, I doubt many of their research subjects were data analysts, predictive modelers, data scientists, or other “Big Data” professionals; but if they were to write a new edition today, I’ll bet there would be a lot more on the list.  “Big Data” jobs seem to be “the right occupation” today.

In a recent interview with the Wall Street Journal, veteran analytics recruiter Linda Burtch of Burtch Works predicted that job candidates with little familiarity with “Big Data” will face a “permanent pink slip,” while observing that analytics professionals earn a median base salary of $90,000 per year. Ms. Burtch distinguishes between “analytics” professionals (who typically deal with structured data sets) and “data scientists” (who typically work with large, unstructured data sets), when classifying income levels.  Data scientists, Burtch Works found, make a median base salary of $120,000.

Even more impressive are the median base salaries of entry-level professionals, those with three years’ experience or less: $65,000 for analytics professionals and $80,000 for data scientists.  At nine or more years’ experience, the median base salaries rise to $115,000 and $150,000, respectively.

Much of the reason for the hefty salaries is that companies often don’t understand what skill sets they need.  Ms. Burtch mentions this in her comments to the Wall Street Journal, and I indicated as much in a previous blog post.  Add to that the fact that the needed skill sets are highly specialized, and relatively few professionals possess them.  Because of this scarcity, candidates can command such high salaries.

For companies, this suggests that to get the most value out of a “Big Data” hire, they must first decide on the typical projects the candidate will be expected to perform, and then set the required skill set and years of experience accordingly.  Then the company can budget the salary it is willing to pay.  This will ensure that the company isn’t hiring someone with 10 years’ experience in data analytics and paying that person $120,000 per year just to pull data for mailing lists, when it could have hired someone right out of college for about one-third of that.

For candidates, the breadth of skill sets employers seek in “Big Data” professionals suggests they can maximize their salaries by continuing to broaden their skills and experience within the data realm.  For example, someone with years of SAS programming and SQL experience may branch out to other programming tools, such as R and Python. Or, such a professional may expand his or her skill set by developing proficiency in data visualization tools such as Tableau or QlikView.

Working in “Big Data” may not make someone “the millionaire next door,” but it may bring him or her pretty close.

 


Company Practices Can Cause “Dirty” Data

April 28, 2014

As technical people, we often use a not-so-technical phrase to describe the use of bad data in our analyses: “Garbage in, garbage out.” Anytime we build a model or perform an analysis on data that is dirty or incorrect, we will get undesirable results. Data has many opportunities to get murky, and a major cause is the way the business collects and stores it. Moreover, dirty data isn’t always incorrect data; the way a company enters data can be correct for operational purposes, yet useless for a particular analysis being done for the marketing department. Here are some examples:

The Return That Wasn’t a Return

I was recently at an outlet store buying some shirts for my sons. After walking out, I realized the sales clerk rang up the full, not sale, price. I went back to the store to have the difference refunded. The clerk re-scanned the receipt, cancelled the previous sale and re-rang the shirts at the sale price. Although I ended up with the same outcome – a refund – I thought of the problems this process could cause.

What if the retailer wanted to predict the likelihood of merchandise returns? My transaction, which was actually a price adjustment, would be treated as a return. Depending on how often this happens, a particular store can be flagged as having above-average returns relative to comparable stores, and be required to implement more stringent return policies that weren’t necessary to begin with.

And consider the flipside of this process: by treating the erroneous ring-up as a return, the retailer won’t be alerted to the possibility that clerks at this store may be making mistakes in ringing up information; perhaps sale prices aren’t being entered into the store’s system; or perhaps equipment storing price updates isn’t functioning properly.

And processing the price adjustment the way the clerk did actually creates even more data that needs to be stored: the initial transaction, the return transaction, and the corrected transaction.

The Company With Very Old Customers

Some years ago, I worked for a company that did direct mailings. I needed to conduct an analysis of its customers and identify the variables that predicted which ones were most likely to respond to a solicitation. The company collected the birthdates of its customers, and from that field I calculated the age of each individual customer. I found that nearly ten percent of the customers were quite old – much older than the market segments this company targeted. A deeper dive on the birthdate field revealed that virtually all of them had the same birthdate: November 11, 1911. (This was back around the turn of the millennium, when companies still recorded dates with two-digit years.)

How did this happen? Well, as discussed in the prior post on problem definition, I consulted the company’s “data experts.” I learned that the birthdate field was a required field for first-time customers. The call center representative could not move from the birthdate field to the next field unless values were entered into the birthdate field. Hence, many representatives in the call center simply entered “11-11-11” to bypass the field when a first-time customer refused to give his or her birthdate.

In this case, the company’s requirement to collect birthdate information met sharp resistance from customers, causing the call center to enter dummy data to get around the operational constraints. Incidentally, the company later made the birthdate field optional.
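
Here is a minimal sketch of how an analyst might guard against such dummy values before computing ages; the column names and records below are hypothetical, assuming the customer file has been loaded with pandas.

    import pandas as pd

    customers = pd.DataFrame({
        "customer_id": [101, 102, 103],
        "birthdate": ["1965-03-02", "1911-11-11", "1972-08-19"],  # hypothetical
    })
    customers["birthdate"] = pd.to_datetime(customers["birthdate"])

    # Treat the call center's bypass value as missing before computing age
    placeholder = pd.Timestamp("1911-11-11")
    customers.loc[customers["birthdate"] == placeholder, "birthdate"] = pd.NaT

    as_of = pd.Timestamp("2000-01-01")
    customers["age"] = (as_of - customers["birthdate"]).dt.days // 365
    print(customers)  # flagged customers now have a missing age, not age 88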

Customers Who Hadn’t Purchased in Almost a Century

Back in the late 1990s, I went to work for a catalog retailer, building response models. The cataloger was concerned that its models were generating undesirable results. I tried running the models with its data and confirmed the models to be untrustworthy. So I started running frequency distributions on all its data fields. To my surprise, I found a field, “Months since last purchase,” in which many customers had the value “999.” Wow – many customers hadn’t purchased since 1916 – almost 83 years earlier!

I knew immediately what had happened. In the past, when data was often read into systems from magnetic tape, the data systems were programmed to require every field to be populated. If a value for a particular field was missing, the value for the next field would be read into its place, and so forth; when the program reached the end of the record, it would continue into the next record, pulling values from there until all fields for the previous record were filled. This was a data nightmare.

To get around this, fields whose data was missing or unknown were filled with a series of 9s, so that all the other data would be entered into the system correctly. This process was fine and dandy, as long as the company’s analysts accounted for this practice during their analysis. The cataloger, however, would run its regressions using those ‘999s,’ resulting in serious outliers, and regressions of little value.

In this case, the cataloger’s attempt to rectify one data malady created a new one. I corrected this by recoding the values: I broke the customers whose last purchase date was known into intervals and assigned ranking values – a 1 for the most recent customers, a 2 for the next most recent, a 3 for the next most recent, and so forth – and gave the lowest rank to those whose last purchase date was unknown.
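
A minimal sketch of that recoding, assuming the field is loaded as a pandas Series; the interval edges below are hypothetical.

    import pandas as pd

    months_since = pd.Series([2, 7, 999, 14, 30, 999, 5])

    # Break customers with a known last purchase date into recency intervals,
    # ranked 1 (most recent) through 4
    ranks = pd.cut(
        months_since.where(months_since != 999),  # treat 999 as unknown
        bins=[0, 6, 12, 24, 998],
        labels=[1, 2, 3, 4],
    )

    # Give the lowest rank to customers whose last purchase date is unknown
    ranks = ranks.cat.add_categories([5]).fillna(5).astype(int)
    print(ranks.tolist())  # [1, 2, 5, 3, 4, 5, 1]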

The Moral of the Story

Company policy is a major cause of dirty data. These examples – which are far from comprehensive – illustrate how the way data is entered can cause problems. Often, a data fix proves shortsighted, as it causes new problems down the road. This is why it is so important for analysts to consult the company’s data experts before undertaking any major data mining effort. Knowing how a company collects and stores data and making allowances for it will increase the likelihood of a successful data mining effort.

How “Big Data” Can Improve Educational Outcomes

April 23, 2014

Our news media frequently inundates us with study upon study of how the American education system trails most other advanced countries in math and science, graduation rates, or some other metric of education performance.  I disagree strongly with most of these studies for reasons I won’t go into, except to say that many of their researchers cherry-pick data and then use the most alarming findings for media sound bites.  But, let us take these studies at face value for a moment and assume their findings are correct.  What then do we do about our “failing” education system?

Big Data to the Rescue

Education is a treasure trove of data; only recently have schools been making use of this data to improve outcomes in education, and much of their work to date is only scratching the surface.

Schools collect data on several attributes: a student’s progress in each subject over time; the teacher for each subject; the instruction styles for each teacher; the student’s likes and dislikes; whether students drop out or graduate; demographic, neighborhood, and socioeconomic characteristics of each student; teacher tenure and training; and so on.  Consider the ways schools might use such data to improve educational outcomes:

  • Identify the factors that drive subject failure or school dropout, predict which students are at highest risk of either event, and intervene (see the sketch after this list);
  • Enhance the professional development of teachers by identifying the areas of their teaching styles and methods that are most and least effective;
  • Identify the types of environments under which individual students perform best and tailor their curriculum accordingly;
  • Identify ineffective curricula and instruction and direct school resources to ones that are more effective;
  • Determine whether underperforming students are clustered within a particular classroom and drill down to determine whether the teacher needs additional training or resources, or if he/she has a larger number of students with special needs; and
  • Predict whether a student is more likely to succeed in a college-preparatory or vocational environment and tailor his or her curriculum accordingly.
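
As a minimal sketch of the first item on this list, a school might fit a simple classifier to flag at-risk students; the features, figures, and threshold below are entirely hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: absences per term, GPA, subjects failed to date -- hypothetical
    X = np.array([[2, 3.5, 0], [25, 1.9, 3], [10, 2.8, 1], [30, 1.5, 4]])
    y = np.array([0, 1, 0, 1])  # 1 = student dropped out

    model = LogisticRegression().fit(X, y)

    # Score a current student; those above a chosen threshold get an intervention
    risk = model.predict_proba([[20, 2.1, 2]])[0, 1]
    print(f"estimated dropout risk: {risk:.0%}")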

This list is far from comprehensive. Keep in mind, however, that just as in business settings, educational institutions must use “Big Data” judiciously when trying to enhance educational outcomes; the constraints under which schools operate, especially those governing the use of student and teacher data, must still be taken into account before a school undertakes a data mining effort, and again before it takes action based on the findings from that effort.  Getting buy-in from parents and other community stakeholders is essential to ensuring that a school’s data mining efforts are successful.

As I said earlier, I don’t believe a lot of the studies about the performance of U.S. schools.  If their findings are indeed true, then “Big Data” can be quite useful in identifying and rectifying problem areas; if the findings are not true, then the data mining effort can make the performance of our schools even better.  But as with any organization wishing to use data mining, school administrators must decide what problem or problems they want data mining to solve and follow the steps as described in my last blog post.  The rules, caveats, and benefits of “Big Data” apply just as much to public sector industries like education as they do to for-profit industries.

Big Data Success Starts With Well-Defined Business Problem

April 18, 2014

(This post also appears on our successor blog, The Analysights Data Mine).

Lots of companies are jumping on the “Big Data” bandwagon; few of them, however, have given real thought to how they will use their data or what they want to achieve with the knowledge the data will give them.  Before reaping the benefits of data mining, companies need to decide what is really important to them.  In order to mine data for actionable insights, technical and business people within the organization need to discuss the business’ needs.

Data mining efforts and processes will vary, depending on a company’s priorities.  A company will use data very differently if its aim is to acquire new customers than if it wants to sell new products to existing customers, or find ways to reduce the cost of servicing customers.  Problem definition puts those priorities in focus.

Problem definition isn’t just about identifying the company’s priorities, however.  In order to help the business achieve its goals, analysts must understand the constraints (e.g., internal privacy policies, regulations, etc.) under which the company operates, whether the necessary data is available, whether data mining is even necessary to solve the problem, the audience at whom data mining is directed, and the experience and intuition of the business and technical sides.

What Does The Company Want to Solve?

Banks, cell phone companies, cable companies, and casinos collect lots of information on their customers.  But their data is of little value if they don’t know what they want to do with it.  In the banking industry, where acquiring new customers often means luring them away from another bank, a bank’s objective might be to cross-sell – that is, to get its current depositors and borrowers to acquire more of its products – so that they will be less inclined to leave the bank.  If that’s the case, then the bank’s data mining effort will involve looking at the products its current customers have and the order and manner in which the customers acquired those products.

On the other hand, if the bank’s objective is to identify which customers are at risk of leaving, its data mining effort will examine the activity of departing households in the months leading up to their defection, and compare it to those households it retained.
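
A minimal sketch of that comparison, assuming a household-level activity table with a defection flag; all names and figures here are hypothetical.

    import pandas as pd

    activity = pd.DataFrame({
        "household_id": [1, 2, 3, 4],
        "txns_last_3mo": [2, 18, 1, 22],              # transactions in prior 3 months
        "balance_trend": [-0.40, 0.05, -0.55, 0.10],  # pct. balance change
        "defected": [1, 0, 1, 0],                     # 1 = household left the bank
    })

    # Compare departing households with retained ones on each activity measure
    print(activity.groupby("defected")[["txns_last_3mo", "balance_trend"]].mean())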

If a casino’s goal is to decide on what new slot machines to install, its data mining effort will look at the slot machine themes its top patrons play most and use that in its choice of new slot machines.

Who is the Audience the Company is Targeting?

OK, so the bank wants to prevent customers from leaving.  But does it want to prevent all customers from leaving?  Usually, only a small percentage of households accounts for all of a bank’s profit; many banking customers are actually unprofitable.  If the bank wants to retain its most profitable customers, it need only analyze that subgroup of its customer base.  Predictions of its premier customers’ likelihood to leave, based on a model developed on all its customers, would be highly inaccurate.  In this case, the bank would need to build a model only on its most profitable customers.

Does the Problem Require Data Mining?

Data mining isn’t always needed.  Years ago, when I was working for a catalog company, I developed regression models to predict which customers were likely to order from a particular catalog.  When a model was requested for the company’s holiday catalog, I was told that it would go to 85 percent of the customer list.  When such a large proportion of the customer base – or the entire customer base for that matter – is to receive communication, then a model is not necessary.  More intuitive methods would have sufficed.

Is Data Available?

Before a data mining effort can be undertaken, the data necessary to solve the business problem must be available or obtainable.  If a bank wants to know the next best product to recommend to its existing customers, it needs to know the first product those customers acquired, how they acquired it, the length of time between the first product and the second, then between the second and the third, and so forth. The bank also needs to understand which products its customers acquired simultaneously (such as a checking account and a credit card), current activity with those products, and the sequence of product acquisition (e.g., checking account first, savings account second, certificate of deposit third, etc.).
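
A minimal sketch of deriving those acquisition sequences and gaps from account-opening records; the table and column names are hypothetical.

    import pandas as pd

    accounts = pd.DataFrame({
        "customer_id": [1, 1, 1, 2, 2],
        "product": ["checking", "savings", "cd", "checking", "credit card"],
        "opened": ["2012-01-05", "2012-06-10", "2013-02-01",
                   "2012-03-15", "2012-03-15"],  # same day = acquired together
    })
    accounts["opened"] = pd.to_datetime(accounts["opened"])
    accounts = accounts.sort_values(["customer_id", "opened"])

    # Sequence of product acquisition for each customer
    sequences = accounts.groupby("customer_id")["product"].agg(list)
    print(sequences)

    # Approximate months between successive acquisitions
    accounts["months_since_prior"] = (
        accounts.groupby("customer_id")["opened"].diff().dt.days / 30.4
    )
    print(accounts)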

It is extremely important that analysts consult both those on the business side and the IT department about the availability of data.  These internal experts often know what data is collected on customers, where it resides, and how it is stored.  In many cases, these experts may have access to data that doesn’t make it into the enterprise’s data warehouse.  And they may know what certain esoteric values for fields in the data warehouse mean.  Consulting these experts can save analysts a lot of time in understanding the data.

Under What Constraints Does the Business Operate?

Companies have internal policies regulating how they operate; they are subject to the regulations and laws governing the industries and localities in which they operate; and they are bound by the ethical standards of those industries and locations.

Often, a company has access to data that, if used in making business decisions, can be illegal or viewed as unethical.  The company doesn’t acquire this data illegally; the data just cannot be used for certain business practices.

For example, I was building customer acquisition models for a bank a few years ago.  The bank’s data warehouse had access to summarized credit score statistics by block groups, as defined by the U.S. Bureau of the Census.  However, banks are subject to the Community Reinvestment Act (CRA), a 1977 law that was passed to prevent banks from excluding low- to moderate-income neighborhoods in their communities from lending decisions.  Obviously, credit scores are going to be lower in lower-income areas. Hence, under CRA guidelines, I could not use the summarized credit statistics to build a model for lending products.  I could, however, use those statistics for a model for deposit products; for post campaign analysis, to see which types of customers responded to the campaign; and also to demonstrate compliance with the CRA.

In addition, the bank’s internal policies did not allow the use of marital status in promoting products.  Hence, when using demographic data that the bank purchased, I had to ignore the field “married” when building my model.  In cases like these, less direct approaches can be used.  The purchased data also contained a field called “number of adults (in the household),” which was totally appropriate to use, since a household with two adults is not necessarily a married-couple household.

Again, the analyst must consult the company’s business experts in order to understand these operational constraints.

Are the Business Experts’ Opinions and Intuition Spot-On?

It’s often said that novices make mistakes out of ignorance and veterans make mistakes out of arrogance.  The business experts have a lot of experience in the company and a great deal of intuition, which can be very insightful.  However, they can be wrong too.  With every data mining effort, the data must be allowed to tell the story.  Does the data validate what the experts say?  For example, most checking accounts are automatically bundled with a debit card; a bank’s business experts know this; and the analysis will often bear this out.

However, if the business experts say that a typical progression in a customer’s banking relationship starts with demand deposit accounts (e.g., checking accounts) then consumer lending products (e.g., auto and personal loans), followed by time deposits (e.g., savings accounts and certificates of deposit), does the analysis confirm that?

 

Problem definition is the hardest, trickiest, yet most important prerequisite to getting the most out of “Big Data.”  Beyond knowing what the business needs to solve, analysts must also consider the audience the data mining effort is targeting; whether data mining is even necessary; the availability of data and the conditions under which it may be used; and the experience and intuition of the business experts.  Effective problem definition begets data mining efforts that produce insights a company can act upon.