Posts Tagged ‘judgmental forecasting’

Forecast Friday Topic: Other Judgmental Forecasting Methods

March 3, 2011

(Thirty-ninth in a series)

Over the last several weeks, we discussed a series of non-quantitative forecasting methods: the Delphi Method, Jury of Executive Opinion, Sales Force Composite Forecasts, and Surveys of Expectations. In today's post, we'll wrap up with a brief discussion of three more judgmental forecasting methods: Scenario Writing, La Prospective, and Cross-Impact Analysis.

Scenario Writing

When a company's or industry's long-term future is far too difficult to predict (whose isn't!), it is common for experts in that company or industry to ponder the possible situations in which it may find itself in the distant future. The documentation of these situations – scenarios – is known as scenario writing. Scenario writing seeks to get managers thinking in terms of possible outcomes at a future time for which quantitative forecasting methods may be inadequate. Unfortunately, much of the literature on this approach suggests that writing multiple scenarios does not yield forecasts of much better quality than any of the other judgmental forecasting methods we've discussed to date.

La Prospective

Developed in France, La Prospective eschews quantitative models and emphasizes several potential futures that may result from the activities of individuals. Interactions among several events, many of which are dynamic in structure and constantly evolving, are studied; their impacts are cross-analyzed; and their effect on the future is assessed. La Prospective devotes considerable attention to the power, strategies, and resources of the individual "agents" whose actions will influence the future. Because the different components being analyzed can be dynamic, the forecasting process for La Prospective is often not linear; stages can progress in a different order or simultaneously. And the company doing the forecasting may itself be one of the influential agents involved, which helps it assess the value of any actions it might take. After the La Prospective process is complete, scenarios of the future are written, from which the company can formulate strategies.

Cross-Impact Analysis

Cross-impact analysis seeks to account for the interdependence of uncertain future events. Quite often, the occurrence of one future event can be caused or determined by the occurrence of another. And often, an analyst may have strong knowledge of one event and little or no knowledge of the others. For example, in trying to predict the future price of tissue, experts at companies like Kimberly-Clark, along with resource economists, forest experts, and conservationists, may all have useful views. If a country that has vast acreages of timber imposes more stringent regulations on the cutting down of trees, that can result in sharp increases in the price of tissue. Moreover, a major increase, or even a sharp reduction, in the incidence of influenza or of the common cold – the realm of epidemiologists – can also influence the price of tissue. And even the current tensions in the Middle East – the realm of foreign policy experts – can affect the price of tissue: if those tensions escalate, the price of oil shoots up, driving up the price of the energy required to convert the timber into paper, as well as the price of gas to transport the timber to the paper mill and the tissue to wholesalers and retailers. Cross-impact analysis measures the likelihood that each of these events will occur and attempts to assess the impact they will have on the future of the event of interest.
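
The post describes the idea rather than a specific algorithm, but one rough way an analyst might operationalize it is a small Monte Carlo simulation: give each driver event a baseline probability, specify how its occurrence shifts the odds of the event of interest, and simulate many futures. Every probability and impact multiplier in the sketch below is a hypothetical illustration, not an estimate from the tissue example.

    import numpy as np

    # Cross-impact Monte Carlo sketch: three "driver" events shift the odds of
    # the event of interest (a tissue price spike). All numbers are hypothetical.
    base_prob = np.array([0.30, 0.20, 0.40, 0.15])   # logging rules, flu outbreak,
                                                     # Mideast tensions, price spike
    # Multiplier applied to the price-spike probability if a driver occurs
    impact_on_spike = np.array([2.5, 1.5, 1.8])

    rng = np.random.default_rng(42)
    n_runs, hits = 10_000, 0
    for _ in range(n_runs):
        occurred = rng.random(3) < base_prob[:3]          # which drivers happen this run
        p_spike = base_prob[3]
        for i, happened in enumerate(occurred):
            if happened:
                p_spike = min(1.0, p_spike * impact_on_spike[i])
        hits += rng.random() < p_spike

    print(f"Estimated probability of a tissue price spike: {hits / n_runs:.2%}")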

Next Forecast Friday Topic: Judgmental Bias in Forecasting

Now that we have discussed several of the judgmental forecasting techniques available to analysts, it is obvious that, unlike quantitative methods, these techniques are not objective. Because, as their name implies, judgmental forecasting methods are based on judgment, they are highly susceptible to biases. Next week’s Forecast Friday post will discuss some of the biases that can result from judgmental forecasting methods.

Forecast Friday Topic: The Delphi Method

February 17, 2011

(Thirty-eighth in a series)

Last week we discussed the role of expert judgment in making forecasts. When quantitative data are not available, or when we are trying to predict a major structural shift in the future, we often rely on those people who are well-versed and knowledgeable in the field for which we seek forecasts. The Delphi Method is one way to do this.

Developed at the start of the Cold War by the Rand Corporation, the Delphi Method has its grounding in technological forecasting, as it was designed to forecast the impact of technology on warfare. The name "Delphi" comes from the Oracle of Delphi, which in Greek mythology foretold the future. Quantitative models are often of limited use when trying to predict far into the future: environmental patterns, largely driven by technology changes, can be altered dramatically over long periods of time. When projecting far into the future, we want to know how probable, frequent, or intense future events will be. This is where Delphi comes in.

The Delphi Method is a structured, interactive, iterative communication technique joining together experts to share their opinions on the future. Unlike the Jury of Executive Opinion, which we discussed in last week’s post, this panel of experts does not meet face-to-face. This ensures that experts’ opinions are not influenced by those of other panel members. The number of experts on the panel is large, and many of them may differ greatly in their areas of expertise.

Panel members are given questionnaires asking them a series of "what," "if," "what if," or "when" questions about the future. They may even be presented with scenarios and asked to predict the probability of such a scenario occurring and when it may occur. Differences in experience, information availability, and interpretation methods among panel members will ensure a wide diversity of views. In order to move panelists toward consensus, their opinions are summarized and shared (anonymously) with the other panel members, and the panelists are encouraged to adjust their predictions based on these viewpoints. When certain panel members hold views substantially different from the group median, they are asked to provide written justification, so that the strength of their opinions can be determined. After a few iterations, the group tends to move toward a consensus forecast.
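
To make the convergence idea concrete, here is a toy sketch in Python. It assumes, purely for illustration, that each panelist moves a fixed fraction of the way toward the anonymized group median after every round; a real Delphi exercise relies on written argument rather than a mechanical adjustment rule, and the initial estimates below are made up.

    import numpy as np

    # Toy Delphi convergence sketch: 12 experts' initial forecasts of some
    # quantity, each pulled partway toward the group median in every round.
    rng = np.random.default_rng(7)
    estimates = rng.normal(loc=100, scale=25, size=12)   # hypothetical starting opinions

    for round_no in range(1, 4):
        group_median = np.median(estimates)
        estimates += 0.4 * (group_median - estimates)    # assumed 40% adjustment rate
        spread = estimates.max() - estimates.min()
        print(f"Round {round_no}: median = {group_median:.1f}, spread = {spread:.1f}")

The spread shrinks each round, mirroring the tendency toward consensus described above; whether that consensus lands near the true outcome is, as noted below, a separate question.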

The Delphi Method is not without its drawbacks. While the absence of face-to-face meetings eliminates biased viewpoints brought on by authority, seniority, and articulation, it also greatly reduces – if not eliminates – immediate access to the knowledge of others. Hence, panelists provide their views in isolation and, based on their experiences, may not consider certain facts in their assessments. Moreover, Delphi techniques can be expensive and time consuming, as experts' time is at a premium and searching for them can be intense. In addition, because the Delphi Method is used to predict several years into the future, a lot of time must be allowed to elapse before one can determine whether the method was appropriate for the task on which it was used. Finally, while the iterative process moves experts toward a group median, it is far less clear that it pulls the group toward the true future outcome.

Next Forecast Friday Topic: Other Judgmental Forecasting Methods

In next week's Forecast Friday post, we will discuss a few other judgmental forecasting approaches that are used when quantitative data is not available. The week after that, we will discuss the various judgmental biases that exist in forecasting. These next two posts will round out our discussion of judgmental methods, after which we will move into the final segment of the series, "Combining and Evaluating Forecasts."

********************************************************

Follow us on Facebook and Twitter!

For the latest insights on marketing research, predictive modeling, and forecasting, be sure to check out Analysights on Facebook and Twitter! “Like-ing” us on Facebook and following us on Twitter will allow you to stay informed of each new Insight Central post published, new information about analytics, discussions Analysights will be hosting, and other opportunities for feedback. So check us out on Facebook and Twitter!

Forecast Friday Topic: Expert Judgment

February 10, 2011

(Thirty-seventh in a series)

Last week, we began our discussion of judgmental forecasting methods, talking about judgmental extrapolation, which required no real understanding of the physical process behind the time series. Today, we will talk about more sophisticated judgmental techniques that are used in subjective forecasting; “sophisticated” only in the sense that the opinion of “experts” is used in trying to predict the future. The three techniques we will discuss are the Jury of Executive Opinion, sales force composite forecasts, and surveys of expectations.

Jury of Executive Opinion

The Jury of Executive Opinion is quite often seen in an organization's budgeting and strategic planning process. The "jury" is often a group of high-level executives from all areas of the organization – marketing, finance, human resources, manufacturing, etc. – who come together to discuss their respective areas of business and work out a composite forecast of where the organization's business will be. Each executive shares his/her opinions and weighs and evaluates those of the other executives. After discussion, the executives write down their forecasts, which are then averaged.

One example of the Jury of Executive Opinion takes me back to 1999-2000, when I worked for catalog retailer Hammacher Schlemmer. Hammacher Schlemmer convened a weekly committee to estimate the orders coming in over the next two weeks for each of the active catalogs in circulation. The committee was made up of several marketing personnel, including myself (as I was the forecasting analyst!), and managers from the warehouse, in-bound call center, inventory control, and merchandising. We would begin every Wednesday morning by reviewing the number of orders that came in for each active catalog, for the prior week and the first two days of the current week. Armed with that order information, and with spreadsheets detailing each catalog's prior-year order history, each of us would record our order forecasts for the next several weeks. Our forecasts were then averaged, and we would submit the composite forecasts to the warehouse and call center to assist with their staffing, and to inventory control to ensure adequate purchasing.
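
The averaging step is as simple as it sounds. A minimal sketch, with entirely made-up committee members and order figures, looks like this:

    # Composite forecast from a "jury": each member submits an order forecast
    # for the next two weeks, and the composite is the simple average.
    jury_forecasts = {
        "marketing":         [4200, 4500],
        "warehouse":         [3900, 4300],
        "call_center":       [4100, 4400],
        "inventory_control": [4000, 4250],
        "merchandising":     [4400, 4600],
    }

    n_members = len(jury_forecasts)
    composite = [
        round(sum(member[week] for member in jury_forecasts.values()) / n_members)
        for week in range(2)
    ]
    print(f"Composite order forecast, weeks 1-2: {composite}")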

One of the nice things about the Jury of Executive Opinion is its simplicity. Getting executives to sound off is often pretty easy to do. Moreover, incorporating the experiences of a broad group into the forecasting process may enable companies to see the forest beyond the trees.

However, simple and broad-focused as it may be, the Jury of Executive Opinion is not without its flaws. These meetings can be time consuming, for one. Indeed, at Hammacher Schlemmer, during the last three months of the year – when the holiday season was in full swing – those weekly meetings could take all morning, as nearly a dozen catalogs could be in circulation. Furthermore, group dynamics may actually lead to unwise consensus forecasts. The group is often at risk of being swayed by the opinions of those members who are most articulate or who have greater seniority or rank within the organization, or just by their own over-optimism. Another problem is that the passage of time makes it difficult to recognize which experts' opinions were most reliable and which should be given less weight. As a result, there's no way to hold any individual member accountable for a forecast. Finally, executives are more comfortable using their opinions for mid- and longer-range planning than for shorter period-to-period predictions, especially since recent unexpected events can also influence their opinions.

Sales Force Composite Forecasts

When companies have a product that is sold by sales agents in specific territories, it is not uncommon for them to seek the opinions of their sales representatives or branch/territory managers in developing forecasts for each product line. In fact, sales representatives’ opinions can be quite useful, since they are generally close to the customer, and may be able to provide useful insights into purchase intent. Essentially, these companies have their agents develop forecasts for each of the products they sell within a territory. The added benefit of this approach is that a company can develop a forecast for the entire market, as well as for individual territories.
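
As a quick sketch of how the roll-up works, suppose each territory manager submits a unit forecast per product line; summing across territories gives both territory-level and total-market forecasts. The territories, product lines, and figures below are hypothetical.

    # Sales force composite: sum territory-level forecasts into a market total.
    territory_forecasts = {
        "Northeast": {"product_a": 1200, "product_b": 800},
        "Midwest":   {"product_a": 950,  "product_b": 700},
        "South":     {"product_a": 1400, "product_b": 900},
        "West":      {"product_a": 1100, "product_b": 850},
    }

    products = ["product_a", "product_b"]
    market_forecast = {
        p: sum(t[p] for t in territory_forecasts.values()) for p in products
    }

    for product, units in market_forecast.items():
        print(f"{product}: {units} units forecast for the total market")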

Indeed, when I worked in the market research department of insurance company Bankers Life & Casualty during 1997-1999, we frequently conducted surveys of our sales force and branch managers to understand how many long-term care insurance policies, Medicare Supplement policies, and annuities were being sold within each market, and how many were being lost to the competition. These surveys provided a read on the market size for each insurance product at both a regional and national level.

While closeness to the customer is a great advantage of sales force composite surveys, they too have problems. Sales agents have a tendency to be overly optimistic in their forecasts and may set unrealistic goals. In addition, because sales agents are close to the customer, their opinions are likely to be swayed by individual customers' microeconomic purchase decisions, when in fact aggregate sales are often driven by macroeconomic factors. Supplementing sales force composite forecasts with more formal quantitative forecasting methods, where possible, is often recommended.

Surveys of Expectations

We actually covered surveys of expectations in our December 9, 2010 Forecast Friday post, but let me quickly recap. Sometimes when data isn't available for forecasting, companies can conduct surveys to gather opinions and expectations. Marketing research of this kind is often expensive, so surveys of expectations are used when it is believed they will provide valuable information. Surveys work well for new product development, brand awareness, and market penetration. In the December 9, 2010 Forecast Friday post, the audience of the expectation survey was mostly executives and other business experts; in this post, the audience is consumers.

NCH Marketing Services, both the leading processor of grocery coupons and a leading coupon promotion firm – and also a former employer of mine – used surveys to obtain information on coupon usage. The company even asked respondents how many coupons they took to the store in a typical month. From there, the company would estimate the number of coupons redeemed in the U.S. annually.
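
The post doesn't give NCH's actual calculation, but a back-of-the-envelope projection from survey averages to a national figure might look something like the sketch below. Every number is hypothetical, included purely to show the shape of such an estimate.

    # Scale survey answers up to a national annual estimate (all figures hypothetical).
    avg_coupons_per_trip = 4.2       # survey: coupons taken to the store per trip
    avg_trips_per_month = 5.0        # survey: grocery trips per month
    redemption_share = 0.60          # assumed share of carried coupons actually redeemed
    us_households = 118_000_000      # rough count of U.S. households (assumption)

    monthly_per_household = avg_coupons_per_trip * avg_trips_per_month * redemption_share
    annual_national = monthly_per_household * 12 * us_households

    print(f"Estimated annual U.S. coupon redemptions: {annual_national:,.0f}")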

Summary

Companies often must rely solely on expert judgment for looking ahead. The Jury of Executive Opinion, sales force composite forecasts, and consumer surveys are just some of the approaches companies can take to predict the future when more formal quantitative methods are either unavailable or unreliable.

Next Forecast Friday Topic: The Delphi Method


Forecast Friday Topic: Judgmental Extrapolation

February 3, 2011

(Thirty-sixth in a series)

The forecasting methods we have discussed since the start of the Forecast Friday series have been quantitative. Formal quantitative models are often quite useful for predicting the near future, as the recent past often indicates expected results for the future. However, things change over time. While predictive models might be useful in forecasting the number of visits to your Web site next month, they may be less relevant to predicting your company’s social media patterns five or 10 years from now. Technology is likely to change dramatically during that time. Hence, more qualitative, or judgmental, forecasts are often required. Thus begins the next section of our series: Judgmental Methods in Forecasting.

Yet even with short-run forecasting, human judgment should be a part of the forecasts. A time series model can’t explain why a pattern is happening; it can only make predictions based on the patterns in the series it has “learned.” It cannot take into account the current environment in which those numbers came about, or information some experts in the field have about events likely to occur. Hence, forecasts by models should never be the “be-all, end-all.”

Essentially, there are two types of judgmental forecasting: subject matter expertise, which we will discuss in next week's post, and judgmental extrapolation, which is today's topic. Judgmental extrapolation – also known as bold freehand extrapolation – is the crudest form of judgmental forecasting, and there's really no expertise required to do it. Judgmental extrapolation is simply looking at the graph of a time series and making projections based upon visual inspection. That's all there is to it; no understanding of the physical process behind the time series is required.

The advantage of judgmental extrapolation (the only one I could find, anyway) is its efficiency: it doesn't require a lot of time, effort, understanding of the series, or money. But that's efficiency, not accuracy! When time and money are short, judgmental extrapolation is sometimes the only way to go. But if you already have a time series, you might get better results just plugging it into Excel and using its exponential smoothing or regression tools – and even that is relatively time and cost efficient.
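
If you'd rather not fire up Excel, a few lines of code will do the same job. Here is a minimal simple exponential smoothing sketch in Python; the weekly figures and the smoothing constant are made up for illustration.

    # Simple exponential smoothing: F(t+1) = alpha * Y(t) + (1 - alpha) * F(t)
    def simple_exponential_smoothing(series, alpha=0.3):
        """Return one-step-ahead forecasts for each period in the series."""
        forecast = series[0]              # initialize with the first observation
        forecasts = [forecast]
        for actual in series[:-1]:
            forecast = alpha * actual + (1 - alpha) * forecast
            forecasts.append(forecast)
        return forecasts

    weekly_visits = [1200, 1260, 1180, 1300, 1350, 1280, 1400]   # hypothetical data
    fitted = simple_exponential_smoothing(weekly_visits, alpha=0.3)
    next_week = 0.3 * weekly_visits[-1] + 0.7 * fitted[-1]
    print(f"Forecast for next week: {next_week:.0f} visits")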

Unfortunately, there are no definitive findings in the published literature on the accuracy of judgmental extrapolation. I tend to be among its skeptics. Perhaps the strongest finding I've seen for the accuracy of judgmental forecasts (and it's not really an argument in favor!) is that, when shown graphs of forecasts, individuals can adjust them in ways that improve the forecasts, but only if the forecasts themselves are far from optimal. That was the finding of T. R. Willemain, in a 1991 article in the International Journal of Forecasting.

So why do I mention judgmental extrapolation? As I said before, sometimes you need to make decisions quickly and without resources or adequate information. What’s more, judgmental extrapolation’s value – though not proven – has also not been disproven. Until its value is disproven, judgmental extrapolation should be considered another tool in the forecasting arsenal.

Next Forecast Friday Topic: Expert Judgment

Today we talked about forecasts relying upon non-expert judgment. Next week, we’ll talk about judgmental forecasts that are based on the opinion of subject matter experts.


Objective and Subjective Forecasting Approaches

May 3, 2010

(second in a series)

Today we discuss the various categories of forecasting methods that are available to businesses.  Forecasting methods can be either objective (using quantitative approaches) or subjective (using more intuitive or qualitative approaches), depending on what data is available and the distance into the future for which a forecast is desired.  Forecasting approaches will typically be more objective for nearer-term forecasting horizons and for events where there is plenty of quantitative data available.  More distant time periods, or events with little historical quantitative data, will often call for more subjective approaches.  We will discuss these two classes of forecasting methods, and the categories within each.

Objective Forecasting Approaches  

Objective forecasting approaches are quantitative in nature and lend themselves well to situations with an abundance of data.  There are three categories of objective forecasting methods: time series, causal/econometric, and artificial intelligence.  AI approaches are outside my experience, so I won't be covering them in this series, but I mention them as another alternative in case you wish to investigate them on your own.

Time Series Methods

Time series methods attempt to estimate future outcomes on the basis of historical data.  In many cases, prior sales of a product can be a good predictor of upcoming sales because of prior period marketing efforts, repeat business, brand awareness, and other factors.  When an analyst employs time series methods, he/she is assuming that the future will continue to look like the past.  In rapidly changing industries or environments, time series forecasts are not ideal, and may be useless.

Because time series data are historical, they exhibit four components that emerge over time: trend, seasonal, cyclical, and random (or irregular).  Before any forecasting is done on time series data, the data must be adjusted for each of these components.  Decomposing time series data will be discussed later in this series. 
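
Decomposition will get its own treatment later, but as a preview, here is one rough way to pull an additive series apart using only pandas. The monthly figures are invented, and the use of a plain centered 12-month moving average for the trend is a simplification of the classical procedure.

    import pandas as pd

    # Rough additive decomposition: trend = centered 12-month moving average,
    # seasonal = average monthly deviation from trend, remainder = what's left.
    sales = pd.Series(
        [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118] * 3,  # 3 years, hypothetical
        index=pd.date_range("2008-01-01", periods=36, freq="MS"),
    )

    trend = sales.rolling(window=12, center=True).mean()
    detrended = sales - trend
    seasonal = detrended.groupby(detrended.index.month).transform("mean")
    remainder = sales - trend - seasonal

    print(pd.DataFrame({"sales": sales, "trend": trend,
                        "seasonal": seasonal, "remainder": remainder}).head(15))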

The most common time series methods include moving average (both straight and weighted), exponential smoothing, and regression analysis.  Each of these approaches will be discussed later in the series.
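
As a quick preview of the first of these, here is a sketch of straight and weighted moving-average forecasts; the monthly sales figures and the weights are arbitrary illustrations, not recommendations.

    # Straight vs. weighted 3-period moving-average forecasts (hypothetical data).
    sales = [240, 255, 248, 262, 270, 265]

    # Straight: the next forecast is the simple mean of the last 3 periods
    straight_ma = sum(sales[-3:]) / 3

    # Weighted: more recent periods get more weight (weights must sum to 1)
    weights = [0.2, 0.3, 0.5]
    weighted_ma = sum(w * x for w, x in zip(weights, sales[-3:]))

    print(f"Straight 3-month MA forecast: {straight_ma:.1f}")
    print(f"Weighted 3-month MA forecast: {weighted_ma:.1f}")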

Causal/Econometric Methods

Causal or econometric forecasting methods attempt to predict outcomes based on changes in factors that are known – or believed – to impact those outcomes.  For example, temperature may be used to forecast sales of ice cream; advertising expenditures may be used to predict sales; or the unemployment rate might be used to forecast the incidence of crime in a neighborhood.  It is important to note, however, that just because a model finds two events to be correlated (i.e., they occur together), it does not necessarily mean that one event has caused the other.

Regression analysis also falls under the causal/econometric umbrella, as it can be used to predict an outcome based on changes in other factors (e.g., SAT score may be used to estimate the likelihood of being accepted to a college).  Econometric forecasting methods include Autoregressive Moving Average (ARMA) and Autoregressive Integrated Moving Average (ARIMA) models; the ARIMA approach is also known as the Box-Jenkins methodology.  ARMA and ARIMA models are useful in certain cases, but most of the time they are unnecessary.  Although these two methods won't be covered in much depth later in the series, there will be a brief description of them and when they are needed.
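
To ground the ice cream example above, here is a minimal causal-regression sketch that fits sales to temperature with ordinary least squares via numpy. The temperatures and sales figures are made up, and, as noted above, a good fit alone still would not establish causation.

    import numpy as np

    # Fit: sales = intercept + slope * temperature (ordinary least squares)
    temperature = np.array([61, 66, 72, 78, 84, 90, 95])     # degrees F (hypothetical)
    sales = np.array([215, 240, 275, 310, 350, 390, 420])    # units sold (hypothetical)

    slope, intercept = np.polyfit(temperature, sales, deg=1)  # highest power first
    predicted_at_80 = intercept + slope * 80

    print(f"sales = {intercept:.1f} + {slope:.2f} * temperature")
    print(f"Predicted sales at 80 degrees F: {predicted_at_80:.0f} units")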

Subjective Forecasting Approaches

Subjective forecasts are more qualitative.  These approaches rely most heavily on judgment and educated guesses, since there is little data available for forecasting.  This is especially the case in long-range forecasting.  It’s easy to forecast next week’s sales of ice cream – and possibly even of individual flavors, since you’ll likely have months or years of past weekly ice cream sales data.  However, if you’re trying to get an idea of what ice cream consumption or flavor preferences will be 10 years from now, quantitative approaches will be of little use.  Changes in tastes, technology, and political, economic, and social factors occur and can dramatically alter the course of trends.  Hence, the opinion of subject matter experts is often called upon.  There is essentially only one category of subjective forecast approaches – and it is rightly called “Judgmental” forecasts.

Judgmental Methods

Judgmental forecasting methods rely heavily on expert opinion and educated guesses.  But just because they have little quantitative or objective basis doesn't mean they should be dismissed or not measured for accuracy.  The most common types of judgmental forecasting methods are composite forecasts, extrapolation, surveys, the Delphi method, scenario writing, and simulation.  Each of these methods will be discussed in detail later in the series.

Introducing “Forecast Fridays” – ON THURSDAYS!!!

Beginning with part 3, which will discuss moving average forecasts, the forecasting series will begin posting weekly so that the remaining days of the week can still be devoted to other topics in the marketing research and analytics field.  The weekly post will be called “Forecast Friday.”  However, it will be posted every Thursday!  Why?  Find out in tomorrow’s post!

