There’s no question that marketers are more focused than ever on the ROI of marketing research.  All too often, however, it seems that efforts to improve ROI aim to get more research per dollar spent rather than better research. 

Better survey design is one sure way to improve the ROI of marketing research.  However, despite advances in our understanding of the cognitive processes involved in answering surveys, market researchers continue to write poor survey questions that may introduce considerable measurement error. 

I think this is due in part to the fact that the processes involved in asking a question are fundamentally different from the processes involved in answering that same question.  Recent contributions to our understanding of the answering process have been integrated into a theory of survey response by Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski (The Psychology of Survey Response, Cambridge University Press, 2000).  According to Tourangeau et al., answering a survey question involves four related processes: comprehending the question, retrieving relevant information from memory, evaluating the retrieved information, and matching the internally generated answer to the available response options in the survey question.

“Think aloud” pretesting, sometimes known as “cognitive” pretesting or “concurrent protocol analysis,” is an important tool for improving the quality of survey questions.  Well-designed think-aloud pretests have often been, in my experience, the difference between research that impacts a firm’s business results and research that ends up on the shelf for lack of confidence in the findings.
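To make the four-stage framework concrete, here is a minimal sketch of how think-aloud findings might be tallied against Tourangeau et al.’s four response processes to flag problem questions.  The question IDs, coded observations, and the summarize helper are hypothetical illustrations, not part of the book or of any standard tool.

```python
from collections import Counter

# The four response processes from Tourangeau, Rips, and Rasinski:
# comprehension, retrieval, evaluation, and response mapping.
STAGES = ("comprehension", "retrieval", "evaluation", "response_mapping")

# Hypothetical observations coded from think-aloud pretest transcripts:
# (question id, stage at which the respondent ran into trouble).
observations = [
    ("Q3", "comprehension"),      # misread "household income"
    ("Q3", "comprehension"),
    ("Q3", "response_mapping"),   # answer did not fit the scale offered
    ("Q7", "retrieval"),          # could not recall purchases from last year
    ("Q7", "evaluation"),         # guessed rather than estimated
]

def summarize(obs):
    """Count coded problems per question and per response stage."""
    per_question = Counter(q for q, _ in obs)
    per_stage = Counter(s for _, s in obs)
    return per_question, per_stage

by_question, by_stage = summarize(observations)
print("Problems per question:", dict(by_question))
print("Problems per stage:   ", dict(by_stage))
```

Questions that repeatedly trip respondents at the same stage are natural candidates for rewriting before the survey is fielded.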

Warning–what follows is blatant self-promotion of a sort.  ESOMAR is offering my workshop, “Think like a respondent:  A cognitive approach to designing and testing online questionnaires” as part of Congress 2011.  The workshop is scheduled for Sunday, September 18, 2011. This year’s Congress will be held in Amsterdam.  I’ve offered the workshop once before, at the ESOMAR Online Conference in Berlin last October.

Hope to see you in Amsterdam.

There’s an interesting article by Jonah Lehrer in the Dec. 13 issue of The New Yorker, “The Truth Wears Off:  Is there something wrong with the scientific method?” Lehrer reports that a growing number of scientists are concerned about what psychologist Joseph Banks Rhine termed the “decline effect.”  In a nutshell, the “decline effect” is the tendency for the size of an effect to shrink over the course of studies attempting to replicate it.  Lehrer cites examples from studies of the clinical outcomes for a class of once-promising antipsychotic drugs as well as from more theoretical research.  This is a scary situation given the inferential nature of most scientific research.  Each set of observations represents an opportunity to disconfirm a hypothesis.  As long as subsequent observations don’t lead to disconfirmation, our confidence in the hypothesis grows.  The decline effect suggests that replication is more likely, over time, to disconfirm a hypothesis than not.  Under those circumstances, it’s hard to develop sound theory.
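As a purely illustrative aside (my own toy simulation, not anything from Lehrer’s article), the sketch below shows one commonly discussed mechanism that can produce an apparent decline: if only studies whose estimates clear a significance threshold get reported, the initial reported effect overstates the true effect, and faithful replications then appear to shrink it.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2      # small real effect, in standard-deviation units
N = 50                 # respondents per group in each study
SE = (2 / N) ** 0.5    # standard error of a two-group mean difference

def run_study():
    """Return one study's estimated effect: truth plus sampling noise."""
    return random.gauss(TRUE_EFFECT, SE)

# "Published" originals: keep only studies whose estimate clears roughly p < .05.
originals = [est for est in (run_study() for _ in range(10_000)) if est > 1.96 * SE]

# Replications of those published findings face no such filter.
replications = [run_study() for _ in originals]

print(f"True effect:               {TRUE_EFFECT:.2f}")
print(f"Mean published original:   {statistics.mean(originals):.2f}")
print(f"Mean replication estimate: {statistics.mean(replications):.2f}")
```

Nothing about the underlying effect changes in this toy world; the selection on significant initial results simply washes out on replication.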

Given that market researchers apply much of the same reasoning as scientists in deciding what’s an effect and what isn’t, the decline effect is a serious threat to creating customer knowledge and making evidence-based marketing decisions. (more…)

Looking back over the last year in market research offers an opportunity to consider just which transformations, new ideas, industry trends, and emerging techniques might shape MR over the next few years.  Here’s a list of eight topics I’ve been following, with thoughts on the potential impact each might have on MR over the next two or three years. (more…)

The debate over the accuracy–and quality–of survey research conducted online is flaring at the moment, at least partly in response to a paper by Yeager, Krosnick, Chang, Javitz, Levendusky, Simpser and Wang: “Comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples.”  Gary Langer, director of polling at ABC News, wrote about the paper in his blog “The Numbers” on September 1.  In a nutshell, the paper compares survey results obtained via random-digit dialing (RDD) with those from an Internet panel whose panelists were originally recruited by means of RDD and from a number of “opt-in” Internet panels whose panelists were “sourced” in a variety of ways.  The results produced by the probability sampling methods are, according to the authors, more accurate than those obtained from the non-probability Internet samples.  You can find a response from Doug Rivers, CEO of YouGov/Polimetrix (and Professor of Political Science at Stanford) at “The Numbers,” as well as some other comments.

The analysis presented in the paper is based on surveys conducted in 2004/5.  In recent years the coverage of the RDD sampling frame has deteriorated as the number of cellphone-only users has increased (to 20% currently).  In response to concerns of several major advertisers about the quality of online panel data, the Advertising Research Foundation (ARF) established an Online Research Quality Council and just this past year conducted new research comparing online panels with RDD telephone samples.  Joel Rubinson, Chief Research Officer of The ARF, has summarized some of the key findings in a blog post.  According to Rubinson, this study reveals no clear pattern of greater accuracy for the RDD sample.  There are, of course, differences in the two studies, both in purpose and method, but it seems that we can no longer assume that RDD samples represent the best benchmark against which to compare all other samples. (more…)
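For readers who want a feel for what “more accurate” means in these comparisons, here is a minimal sketch of the underlying logic: survey estimates are compared with known population benchmarks, and the average absolute error is computed for each sample source.  The benchmark values and estimates below are invented for illustration, not figures from either study.

```python
# Hypothetical benchmark values (e.g., from government records) and
# survey estimates from two sample sources, all in percentages.
benchmarks = {"owns_home": 67.0, "has_passport": 27.0, "smokes": 21.0}

estimates = {
    "rdd_phone":    {"owns_home": 65.0, "has_passport": 29.5, "smokes": 19.5},
    "opt_in_panel": {"owns_home": 71.0, "has_passport": 35.0, "smokes": 16.0},
}

def mean_absolute_error(est, bench):
    """Average absolute deviation of survey estimates from benchmarks."""
    return sum(abs(est[k] - bench[k]) for k in bench) / len(bench)

for source, est in estimates.items():
    print(f"{source}: mean absolute error = {mean_absolute_error(est, benchmarks):.1f} points")
```

The published comparisons are considerably more involved (multiple benchmarks, weighting adjustments, significance tests), but this is the basic yardstick.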

The Psychology of Survey Response by Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski (Cambridge University Press, 2000) will change the way you think about the “craft” of survey design.  While there are other, well-regarded books on survey question construction (such as Asking Questions by Norman Bradburn, Seymour Sudman, and Brian Wansink, Jossey-Bass, 2004) and tons of individual research papers and articles on various aspects of survey design, measurement scales, question construction and the like, this is the first book I’ve encountered that presents a practical conceptual framework for understanding the cognitive processes that produce a response to a given question.  Moreover, the authors review a lot of relevant research to support their framework.

(more…)