Best Practices


There’s no question that marketers are more focused than ever on the ROI of marketing research.  All too often, however, it seems that efforts to improve ROI aim to get more research per dollar spent rather than better research. 

Better survey design is one sure way to improve the ROI of marketing research.  However, despite advances in our understanding of the cognitive processes involved in answering surveys, market researchers continue to write poor survey questions that may introduce considerable measurement error. 

I think this is due in part to the fact that the processes involved in asking a question are fundamentally different from the processes involved in answering that same question.  Recent contributions to our understanding of the answering process have been integrated into a theory of survey response by Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski (The Psychology of Survey Response, Cambridge University Press, 2000).  According to Tourangeau et al., answering a survey question involves four related processes:  comprehending the question, retrieving relevant information from memory, evaluating the retrieved information, and matching the internally generated answer to the available responses in the survey question.

“Think aloud” pretesting, sometimes known as “cognitive” pretesting or “concurrent protocol analysis,” is an important tool for improving the quality of survey questions, and well-designed think aloud pretests have often been, in my experience, the difference between research that impacts a firm’s business results and research that ends up on the shelf for lack of confidence in the findings.

Warning–what follows is blatant self-promotion of a sort.  ESOMAR is offering my workshop, “Think like a respondent:  A cognitive approach to designing and testing online questionnaires” as part of Congress 2011.  The workshop is scheduled for Sunday, September 18, 2011. This year’s Congress will be held in Amsterdam.  I’ve offered the workshop once before, at the ESOMAR Online Conference in Berlin last October.

Hope to see you in Amsterdam.

There’s an interesting article by Jonah Lehrer in the Dec. 13 issue of The New Yorker, “The Truth Wears Off:  Is there something wrong with the scientific method?” Lehrer reports that a growing number of scientists are concerned about what psychologist Joseph Banks Rhine termed the “decline effect.”  In a nutshell, the “decline effect” is a tendency for the size of an observed effect to shrink over the course of studies attempting to replicate that effect.  Lehrer cites examples from studies of the clinical outcomes for a class of once-promising antipsychotic drugs as well as from more theoretical research.  This is a scary situation given the inferential nature of most scientific research.  Each set of observations represents an opportunity to disconfirm a hypothesis.  As long as subsequent observations don’t lead to disconfirmation, our confidence in the hypothesis grows.  The decline effect suggests that replication is more likely, over time, to disconfirm a hypothesis than not.  Under those circumstances, it’s hard to develop sound theory.
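Just to make the logic concrete, here’s a toy simulation (mine, not Lehrer’s) of one conventional statistical account of the decline effect: if an effect only gets reported when an early, noisy study happens to overshoot, later replications will regress back toward the true, smaller value.  Every number below is made up for illustration.

```python
# Illustrative sketch: selection on "impressive" early results plus regression
# to the mean produces a decline-effect pattern.  All parameters are invented.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.15   # hypothetical true standardized effect size
NOISE_SD = 0.20      # sampling noise in any single study's estimate
THRESHOLD = 0.35     # an early study gets "published" only if its estimate looks this big

def run_study():
    """Return one study's noisy estimate of the effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# Original findings: keep only early studies whose estimates cleared the threshold.
original_findings = [e for e in (run_study() for _ in range(10_000)) if e >= THRESHOLD]

# Replications: no selection -- every attempt counts.
replications = [run_study() for _ in range(10_000)]

print(f"Mean 'published' original effect: {statistics.mean(original_findings):.2f}")
print(f"Mean replication effect:          {statistics.mean(replications):.2f}")
print(f"True effect:                      {TRUE_EFFECT:.2f}")
# The published estimates look large; the replications "decline" toward the true value.
```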

Given that market researchers apply much of the same reasoning as scientists in deciding what’s an effect and what isn’t, the decline effect is a serious threat to creating customer knowledge and making evidence-based marketing decisions.

An insightful new report from Boston Consulting Group reveals that “most companies have not yet unlocked the value of consumer insight.”  The report is based on a quantitative survey of more than 800 executives from 40 global companies with at least $1.5 billion in sales.  The survey was supplemented with around 200 qualitative interviews, and the participants included line managers as well as members of the consumer insight function in these companies.

The authors found that companies fall into one of four stages of consumer insight capability:

  • traditional market research function
  • business contribution team
  • strategic insight organization
  • strategic foresight organization

The companies falling into the last two stages are getting the biggest return on their investments in consumer insight.  However, according to this report, only about 10% of the surveyed companies are in one of these two stages of insight capability.  In Stage 1 companies, the insight function is more or less an “order taker” relegated to “back room” status, and the focus is on tactical research.  Things are a little better in Stage 2 companies, where projects are sometimes more strategic, but the insight function is still project-focused.

If the consumer insight function is relegated to back room status in the majority of companies, does that make research agencies a back room to the back room?

The New York Times is one of the more interesting innovators when it comes to using data visualization to tell a story or make a point.  In particular, the Business section employs a variety of chart forms to reveal what is happening in financial markets.  The Weather Report uses “small multiples” to show 10-day temperature trends for major U.S. cities.

Even more interesting are the occasional illustrations that appear under the heading of “Op-Chart.”  For a few years now the Times has periodically presented on the Op-Ed page a comparative table that tracks “progress” in Iraq on a number of measures such as electric power generation.

Another impressive chart appeared in “Sunday Opinion” on January 10, 2010.  Titled “A Year in Iraq and Afghanistan,” this full page illustration provides a detailed look at the 489 American and allied deaths that occurred in Afghanistan and the 141 deaths in Iraq.  At first glance, the chart resembles the Periodic Table of Elements.  Deaths in Iraq take up the top one-fourth or so of the chart (along with the legend); deaths in Afghanistan occupy the bulk of the illustration.

Each death is represented by a figure, and each figure appears in a box representing the date on which the death occurred.  One figure shape represents American forces, and a slightly different shape signifies a member of the coalition forces.  For coalition forces, the color of the figure indicates nationality.  A small symbol indicates the cause of each death (homemade bomb, mortar, hostile fire, bomb, suicide bomb, or non-combat related).  Multiple deaths from the same event or cause on a date occupy the same box.

Most dates have only a single death, but a few days stand out as particularly tragic:  seven U.S. troops dying of a non-combat related cause in Afghanistan on October 26; eight killed by hostile fire on October 3; seven killed by a homemade bomb on October 27; six Italians killed by a homemade bomb on September 17; five Americans killed by a suicide bomber in Mosul, Iraq, on April 10.

The deaths are linked to specific locations on maps of Iraq and Afghanistan.  Helmand Province was the deadliest place, with 79 of the 489 deaths in Afghanistan.  In Iraq, Baghdad was the most dangerous place, accounting for 42 of the 141 deaths in that country.  While Americans account for the largest number of the dead in Afghanistan, 112 were British troops.

The chart packs in a wealth of information, with four pieces of data on every death, but in some ways there is too much detail.  To get at the numbers I provided above, I had to count the figures by hand.  There are no summary statistics.  The picture grabs our attention and immediately conveys the magnitude of the price the U.S. and our allies are paying in Afghanistan.  But if we want to act on data, we need a little more than a very clever visual display.  Summaries of the numbers would help here.  It’s useful to know, for example, that 65 of the 141 deaths in Iraq (46%) were due to non-combat related causes, compared to 48 (10%) of the deaths in Afghanistan.  Eighty percent of the fatalities in deadly Helmand Province were due to hostile fire; 57% in other parts of Afghanistan were caused by homemade bombs (in Iraq there were 19 deaths, or 13% of the total, from homemade bombs).
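To make the point concrete, here’s a minimal sketch of the kind of tally I had in mind, assuming the chart’s underlying data were available as one record per death; the handful of records shown are placeholders, not figures read off the chart.

```python
# Sketch: tally deaths by cause within each country and report counts and shares.
from collections import Counter

deaths = [
    {"country": "Iraq", "cause": "non-combat", "nationality": "US", "date": "2009-04-10"},
    {"country": "Afghanistan", "cause": "homemade bomb", "nationality": "UK", "date": "2009-10-27"},
    {"country": "Afghanistan", "cause": "hostile fire", "nationality": "US", "date": "2009-10-03"},
    # ... one dict per death ...
]

def cause_shares(records, country):
    """Counts and percentage shares of deaths by cause within one country."""
    causes = [r["cause"] for r in records if r["country"] == country]
    total = len(causes)
    return {cause: (n, round(100 * n / total)) for cause, n in Counter(causes).most_common()}

for country in ("Iraq", "Afghanistan"):
    print(country, cause_shares(deaths, country))
```

The same record structure would support the other summaries mentioned above (deaths by date, by province, by nationality, or per incident) with one-line changes to the grouping key.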

Two of the creators of this chart, Adriana Lins de Albuquerque (a doctoral student in political science at Columbia) and Alicia Cheng of mgmt.design, produced a slightly different chart summarizing the death toll in Iraq for 2007.  That earlier version did not have as much detail about each individual death (location information is not included, for example) but included some additional causes, such as torture and beheading, that thankfully appear to have disappeared.

The advantage of displaying data in this fashion lies in the ability of our brains to form patterns quickly.  The use of color to designate coalition members makes the contributions of our allies apparent in a way that a simple tally might not.  Even without a year-to-year comparison, we can see that Iraq has become, at least for US troops and our allies, a much safer place than Afghanistan.  Additionally, this one chart presents data that, in other forms, might require several PowerPoint slides to communicate: deaths by date, deaths by city or province, deaths by nationality, cause of death, and number killed per incident.

Any complex visual display of data requires making trade-offs.  In this case, for example, the creators arranged the deaths chronologically (oldest first) within each geographic block.  That means that patterns in other variables, such as cause of death or nationality of troops, may be harder to detect at first glance.  The chronological ordering also has layout implications, since on some dates there were multiple casualties.

All in all, it’s a great piece of data visualization that to my mind would be even better with the addition of a few summary statistics.

A disclaimer–I counted twice to get each of the numbers I provide above, but I offer no guarantee that I am not off by one or two deaths in any of those numbers.

Copyright 2010 by David G. Bakken.  All rights reserved.

I just completed an online survey at the invitation of a company I’ve purchased from in the past.  It was obvious that the survey was an example of what the market research industry calls “D-I-Y” research.  If the quality of the questionnaire had not given this away, there was the “Powered by [name of enterprise feedback software vendor]” at the bottom of the screen.  I was asked to look at two different print ads for one of the products this company sells and answer a few questions that bore some slight resemblance to the questions you might find in an ad test conducted by one of the MR firms that specialize in that type of work.

One can only assume that the results of this survey are meant to drive a decision about which ad to run (there may be other candidates that I didn’t see).  If that’s true, then I think this may be a case where D-I-Y will turn out to be worse than no research at all.  The acid test for any market research is whether or not the decisions made on the basis of that research are “better” than the decisions that would have been made without the research.

The debate over the accuracy–and quality–of survey research conducted online is flaring at the moment, at least partly in response to a paper by Yeager, Krosnick, Chang, Javitz, Levendusky, Simpson and Wang: “Comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples.”  Gary Langer, director of polling at ABC News, wrote about the paper in his blog “The Numbers” on September 1.  In a nutshell, the paper compares survey results obtained via random-digit dialing (RDD) with those from an Internet panel where panelists were recruited originally by means of RDD and from a number of “opt-in” Internet panels where panelists were “sourced” in a variety of ways.  The results produced by the probability sampling methods are, according to the authors, more accurate than those obtained from the non-probability Internet samples.  You can find a response from Doug Rivers, CEO of YouGov/Polimetrix (and Professor of Political Science at Stanford) at “The Numbers,” as well as some other comments.

The analysis presented in the paper is based on surveys conducted in 2004/5.  In recent years the coverage of the RDD sampling frame has deteriorated as the number of cellphone-only users has increased (to 20% currently).  In response to concerns of several major advertisers about the quality of online panel data, the Advertising Research Foundation (ARF) established an Online Research Quality Council and just this past year conducted new research comparing online panels with RDD telephone samples.  Joel Rubinson, Chief Research Officer of The ARF, has summarized some of the key findings in a blog post.  According to Rubinson, this study reveals no clear pattern of greater accuracy for the RDD sample.  There are, of course, differences in the two studies, both in purpose and method, but it seems that we can no longer assume that RDD samples represent the best benchmark against which to compare all other samples.

Have you heard about the Facebook Gross National Happiness Index?  On Monday, October 12, the Times ran an article (by Noam Cohen) reporting some of the findings based on analysis of two years’ worth of Facebook status updates from 100 million users in the U.S.  The index was created by Adam D. I. Kramer, a doctoral candidate in social psychology at the University of Oregon, and is based on counts of positive and negative words in status updates.  According to the article, classification of words as positive or negative is based on the Linguistic Inquiry and Word Count dictionary.
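To make the method concrete, here’s a toy sketch of the word-counting idea behind such an index.  The word lists are stand-ins I made up, not the LIWC dictionary, and the actual index presumably involves more than a raw difference of counts.

```python
# Toy sketch: score a day's status updates by counting positive and negative words.
POSITIVE = {"happy", "awesome", "love", "great", "excited"}   # placeholder word list
NEGATIVE = {"sad", "awful", "hate", "terrible", "upset"}      # placeholder word list

def day_score(status_updates):
    """Net positivity for one day's updates: positive word count minus negative word count."""
    pos = neg = 0
    for update in status_updates:
        words = update.lower().split()
        pos += sum(w.strip(".,!?") in POSITIVE for w in words)
        neg += sum(w.strip(".,!?") in NEGATIVE for w in words)
    return pos - neg

print(day_score(["So happy it's Friday!", "I love long weekends"]))   # a "happier" day
print(day_score(["Terrible news today", "feeling sad and upset"]))    # a "sadder" day
```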

Among the researchers’ conclusions:  we’re happier on Fridays than on Mondays; holidays also make Americans happy.  The premature death of a celebrity may make us sad.  According to a post by Mr. Kramer on the Facebook blog, the two “saddest” days–days with the highest numbers of negative words–were the days on which actor Heath Ledger and pop icon Michael Jackson died.  Mr. Kramer points out that, coincidentally, Mr. Ledger died on the day of the Asian stock market crash, which might have contributed to the degree of negativity.

We’re going to see a lot more of this kind of thing as researchers delve into the rich trove of information generated by users of search engines and web-enabled social networking.  The happiness index, based as it is on simple frequency analysis of words, is the tip of the iceberg.  At the moment, “social media”–I’m not exactly sure what that label means–is getting incredible attention in the marketing and marketing research community.  The question that has yet to be posed, let alone answered, is, “what exactly do we learn from all this information?”

