Survey Research Methods


There’s no question that marketers are more focused than ever on the ROI of marketing research.  All too often, however, it seems that efforts to improve ROI aim to get more research per dollar spent rather than better research. 

Better survey design is one sure way to improve the ROI of marketing research.  However, despite advances in our understanding of the cognitive processes involved in answering surveys, market researchers continue to write poor survey questions that may introduce considerable measurement error. 

I think this is due in part to the fact that the processes involved in asking a question are fundamentally different from the processes involved in answering that same question. Recent contributions to our understanding of the answering process have been integrated into a theory of survey response by Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski (The Psychology of Survey Response, Cambridge University Press, 2000). According to Tourangeau et al., answering a survey question involves four related processes: comprehending the question, retrieving relevant information from memory, evaluating the retrieved information, and mapping the internally generated answer onto the available responses in the survey question.
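To make the four processes concrete, here is a toy rendering in Python. The function names and logic are my own illustration of the model's stages, not anything from Tourangeau et al.; the point is simply that measurement error can creep in at any one of the four steps.

```python
# A toy rendering of the four component processes in Tourangeau, Rips,
# and Rasinski's model. The names and logic are illustrative only.

def comprehend(question: str) -> str:
    """Stage 1: work out what the question is asking for."""
    return question.lower().rstrip("?")

def retrieve(topic: str, memory: dict) -> list:
    """Stage 2: pull relevant episodes or beliefs from memory."""
    return memory.get(topic, [])

def judge(evidence: list) -> float:
    """Stage 3: integrate the retrieved evidence into an internal answer."""
    return sum(evidence) / len(evidence) if evidence else 0.0

def respond(internal_answer: float, scale: list) -> int:
    """Stage 4: map the internal answer onto the offered response options."""
    return min(scale, key=lambda point: abs(point - internal_answer))

memory = {"how satisfied are you with our service": [4.0, 5.0, 3.0]}
question = "How satisfied are you with our service?"
answer = respond(judge(retrieve(comprehend(question), memory)), scale=[1, 2, 3, 4, 5])
print(answer)  # 4
```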

“Think aloud” pretesting, sometimes known as “cognitive” pretesting or “concurrent protocol analysis,” is an important tool for improving the quality of survey questions. In my experience, well-designed think aloud pretests have often been the difference between research that impacts a firm’s business results and research that ends up on the shelf for lack of confidence in the findings.

Warning–what follows is blatant self-promotion of a sort.  ESOMAR is offering my workshop, “Think like a respondent:  A cognitive approach to designing and testing online questionnaires” as part of Congress 2011.  The workshop is scheduled for Sunday, September 18, 2011. This year’s Congress will be held in Amsterdam.  I’ve offered the workshop once before, at the ESOMAR Online Conference in Berlin last October.

Hope to see you in Amsterdam.

In November of last year David Leonhardt, an economics writer for The New York Times, created an “interactive puzzle” that enabled readers to build a solution for reducing the federal deficit by $1.3 trillion (or thereabouts) in 2030. A variety of options involving either spending cuts or tax increases, reflecting the recommendations of the deficit reduction commission, were offered, along with the size of the reduction associated with each option. Visitors to the puzzle simply selected various options until they achieved the targeted reduction.
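The mechanics are simple enough to sketch in a few lines of Python. The option names and savings figures below are invented for illustration; they are not Leonhardt’s actual numbers.

```python
# Minimal sketch of the puzzle's mechanics. Option names and savings
# figures are invented for illustration; they are not Leonhardt's numbers.
TARGET = 1.3e12  # roughly $1.3 trillion in deficit reduction by 2030

options = {
    "cut_discretionary_spending": 0.40e12,
    "raise_retirement_age": 0.35e12,
    "reduce_military_spending": 0.30e12,
    "add_millionaire_surtax": 0.28e12,
    "introduce_carbon_tax": 0.21e12,
}

def total_savings(selected):
    """Sum the deficit reduction across the selected options."""
    return sum(options[name] for name in selected)

def puzzle_solved(selected, target=TARGET):
    """A 'solution' is any combination of options that meets the target."""
    return total_savings(selected) >= target

choice = {"cut_discretionary_spending", "raise_retirement_age",
          "reduce_military_spending", "add_millionaire_surtax"}
print(puzzle_solved(choice))  # True: these trade-offs add up to $1.33T
```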

The options represented trade-offs, the simplest being that between cutting programs and raising revenues. Someone has to suffer, and suffering was not evenly distributed across the options. Nearly seven thousand Twitter users completed the puzzle, and Leonhardt has summarized the choices. You might still be able to access the puzzle online at nytimes.com/weekinreview.

Leonhardt was able to group the solutions according to whether they seemed to consist mostly of spending cuts or a mix of spending cuts and tax increases. He admits that the “sample” is not scientific and, given that it consists of Twitter users, may skew young. Unfortunately, no personal data was collected from those who completed the puzzle, so we’re left to speculate about the patterns of choices. Perhaps a little data mining would shed some additional light on the clustering of responses.
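If the raw choices were available, that data mining might look something like the sketch below: each completed puzzle becomes a binary vector of selected options, and a clustering algorithm (k-means is just one convenient choice) looks for groups of similar solutions. The data here are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one completed puzzle: 1 = option selected, 0 = not selected.
# Synthetic data standing in for the ~7,000 real solutions.
rng = np.random.default_rng(seed=42)
solutions = rng.integers(0, 2, size=(7000, 20))

# Cluster the binary choice vectors; k=2 echoes Leonhardt's informal
# "mostly cuts" vs. "mix of cuts and taxes" grouping.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(solutions)

# Each cluster center shows how often each option was chosen within
# that group, which is where the substantive interpretation would start.
print(kmeans.cluster_centers_.round(2))
```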

Even though this is not survey research in the way that we know it, there may be much value in using this type of puzzle to measure public opinion about the tough choices that the U.S. is facing. The typical opinion survey might ask respondents whether they “favor” one course of action or another (“Do you favor spending cuts or tax increases for reducing the deficit?”). The options presented in Leonhardt’s puzzle represent real policy choices, and the differences between them force you to consider the trade-offs you are willing to make. While the choices were comprehensive, they were not contrived in the way that conjoint analysis structures choices; that might present a problem if we are trying to develop a model to predict or explain preferences.

There’s no reason this technique cannot be used with the same kinds of samples that we obtain for much online survey research.  Add a few demographic and political orientation questions and you have what I think could be a powerful way to capture the trade-offs that the public is willing to make.

Copyright 2011 by David G. Bakken.  All rights reserved.

There’s an interesting article by Jonah Lehrer in the Dec. 13 issue of The New Yorker, “The Truth Wears Off: Is there something wrong with the scientific method?” Lehrer reports that a growing number of scientists are concerned about what psychologist Joseph Banks Rhine termed the “decline effect.” In a nutshell, the “decline effect” is a tendency for the size of an observed effect to shrink over the course of studies attempting to replicate it. Lehrer cites examples from studies of the clinical outcomes for a class of once-promising antipsychotic drugs as well as from more theoretical research. This is a scary situation given the inferential nature of most scientific research. Each set of observations represents an opportunity to disconfirm a hypothesis. As long as subsequent observations don’t lead to disconfirmation, our confidence in the hypothesis grows. The decline effect suggests that replication is more likely, over time, to disconfirm a hypothesis than not. Under those circumstances, it’s hard to develop sound theory.

Given that market researchers apply much of the same reasoning as scientists in deciding what’s an effect and what isn’t, the decline effect is a serious threat to creating customer knowledge and making evidence-based marketing decisions.
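Lehrer surveys several candidate explanations; one standard statistical account (not necessarily his) is selection on significance combined with regression to the mean: inflated initial findings get published, and replications drift back toward the true effect. A minimal simulation makes the point:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
TRUE_EFFECT, SIGMA, N = 0.2, 1.0, 50  # a small but real effect

def observed_effect():
    """One study: the mean of N noisy observations of the true effect."""
    return rng.normal(TRUE_EFFECT, SIGMA, size=N).mean()

# An initial finding gets published only if it looks impressive (crudely
# modeled here as an observed effect at least twice the true size)...
published = [e for e in (observed_effect() for _ in range(10000)) if e > 0.4]

# ...but replications get reported however they turn out.
replications = [observed_effect() for _ in range(10000)]

print(f"published initial effects: {np.mean(published):.2f}")    # ~0.46
print(f"replication effects:       {np.mean(replications):.2f}")  # ~0.20
# The replications regress toward the true 0.2; the "decline" requires
# no change at all in the underlying phenomenon.
```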

Declining response rates have been a problem in survey research for a long time. Now, according to a study by Lori Foster Thompson of North Carolina State University, Zhen Zhang of Arizona State University, and Richard D. Arvey of National University of Singapore, there may be a genetic predisposition to decline to participate in surveys. Or maybe not.

The study, “Genetic underpinnings of survey response,” is to be published in the Journal of Organizational Behavior. A press release from North Carolina State University quotes Dr. Foster: “We wanted to know whether people are genetically predisposed to ignore requests for survey participation. We found that there is a pretty strong genetic predisposition to not reply to surveys.”

The researchers sent a survey to more than 1,000 sets of twins, some identical (and possessing identical DNA) and some fraternal (no more genetically similar than any two siblings). The study found that it was possible to predict the propensity to respond for one identical twin from the response (or non-response) of the other twin, but there was no such relationship for the fraternal twins. The researchers “used quantitative genetic techniques to estimate the genetic, shared environmental, and nonshared environmental effects on people’s compliance with the request for survey participation,” according to the paper abstract.
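The press release doesn’t say which quantitative genetic techniques were used (for a binary outcome like responding or not, a liability-threshold model would be typical). The classic back-of-the-envelope version is Falconer’s formula, which splits variance into additive genetic (A), shared environmental (C), and nonshared environmental (E) components from the two twin correlations. The correlations below are invented:

```python
# Falconer's classic variance decomposition, with invented twin
# correlations (these are NOT the study's results).
r_mz, r_dz = 0.50, 0.20  # similarity of identical vs. fraternal twin pairs

a2 = 2 * (r_mz - r_dz)  # A: additive genetic variance ("heritability")
c2 = r_mz - a2          # C: shared environmental variance
e2 = 1 - r_mz           # E: nonshared environment plus measurement error

print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
# With these inputs: A = 0.60, C = -0.10, E = 0.50. A negative C estimate
# is a hint that the model's assumptions (equal environments, additivity)
# are strained, which is the spirit of the rival-hypothesis objection below.
```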

Notwithstanding the power of the right statistical methods, it’s very difficult to rule out plausible rival hypotheses in single-generation familial inheritance studies. I spent one summer during graduate school analyzing data from an adoption study attempting to prove the heritability of schizophrenia. In addition to the adoption paradigm (that is, looking for differential incidence rates among the biological and adoptive relatives of the adopted, afflicted individual), we have two types of twin studies: those that compare identical twins reared apart and those that compare sets of identical twins with sets of fraternal twins, as in this case. Studies of twins reared apart got a bad rap as a result of Cyril Burt’s fraudulent data purporting to show the heritability of intelligence. Comparisons of identical and fraternal twins run up against the fact that having an identical twin is a very different experience from having a fraternal twin.

I see two potential problems with this study. First, we can’t rule out differences in the way identical twin pairs interact, compared with fraternal twin pairs, as a possible explanation.

The second problem: all genes are expressed at a cellular level in the form of different proteins. Survey non-response, in contrast, is a specific, high-order behavior (far removed from cellular activity) that is unlikely to be governed by a few small chemical differences. I believe that anyone making a claim about the heritability of any behavior ought to suggest a plausible cellular mechanism. It’s also desirable to have some plausible selective pressure that would favor such a genetic predisposition. Given that survey taking is a relatively recent (in human history) activity, I’m not sure you can make a case for any selective advantage in refusing to participate in surveys.

Maybe–and it’s a big maybe–there’s a selective advantage in some cluster of behaviors–such as cooperation–that just happens to manifest itself in the propensity to take surveys. That might be plausible. Perhaps the authors offer that explanation in the full paper. We’ll have to see.

Copyright 2010 by David G. Bakken.  All rights reserved.

I had the pleasure of participating in a lively discussion on the impact and future of “DIY” (do-it-yourself) research a few weeks ago at the ESOMAR Congress in Athens, Greece. In a 90-minute “discussion space” session I shared a few thoughts about the future of the market research industry. The other half of the program was presented by Lucy Davison of marketing consultancy Keen as Mustard and Richard Thornton of CINT. They shared the results of some research on DIY research that they conducted among consumers of market research (i.e., “clients”). Bottom line: many clients are favorable to DIY for a number of reasons.

For my part, I am more interested in DIY as a symptom of deep and fundamental change in the market research industry. When I began my career in MR (on the client side at first), most research companies were vertically integrated, owning their own data collection capabilities and developing their own CATI software, for example. This made sense when the ability to coordinate and integrate the diverse activities required for a typical research project was a competitive strength. Perhaps you remember the days when a strategic segmentation study might have three or four phases, take six to nine months to complete, and cost $500,000 (in 1980 dollars!). But vertically integrated industries tend to “de-integrate” over time. Firms may spin off or outsource some of their capabilities, creating value chain specialists who are proficient at one link in the chain. The emergence of WATS call-centers and off-the-shelf CATI software were early steps on the march towards de-integration for the MR industry.

Technological change (especially in the form of disruptive innovation) also provides opportunity for new entrants.  Sure, some of the face-to-face interviewing companies made the transition to telephone, and many telephone interviewing companies successfully converted from paper and pencil questionnaires to CATI, but each of these shifts provided a point of entry for new players.

The large, integrated firms have managed to hang on to a substantial share of industry profits, but there are three looming threats. The first is (so-called) “commoditization”–the downward pressure on pricing. While some supplier side researchers complain that clients are unwilling to pay for quality, this downward pressure is the result of basic competitive dynamics: there are many competing firms, few barriers to entry, many substitutes (e.g., transactional data mining) and not that much difference in value propositions or business models across MR firms.

The second threat is do-it-yourself research. At the moment, DIY appeals to the least demanding and most price-sensitive customers. DIY removes the access and affordability barriers, thereby democratizing survey research. As Lucy and Richard’s research showed, customers like the low cost, speed, and convenience of DIY, and I expect many will move up the learning curve quickly. I hope so–many of the DIY surveys I’ve seen from even big companies have been pretty ghastly.

The last threat to the traditional MR business model comes from the sheer deluge of data generated by both commercial and non-commercial online activity. How much could Google tell any marketer about customer preferences based just on search data, for example?

At the end of the session in Athens I offered this analogy. Imagine that you need a bedstead. You could go to a furniture store and choose from a selection of attractive, well-constructed, and expensive bedsteads. Or you could go to the local home improvement store, purchase some plywood and paint or stain, and, with a few tools (which could be borrowed or rented) and some minimal ability, construct a perfectly serviceable platform bed at much lower cost. This represents the difference between the full-service integrated research firms at the top of the ladder and what we’ve historically thought of as do-it-yourself market research. The gap between the two has been sustained until now by a skill barrier and limited access to better, easier-to-use tools. This is the gap that Ikea filled in the home furnishings market by creating a new business model based on attractive, customer-assembled furnishings.

Unfortunately for the incumbent research firms, this kind of business model innovation does not often come from the current players in a market.  The incumbents have too much personal investment in the current business model.  Let’s face it–most of us are in market research because we like the high-touch, intellectual problem solving that’s involved.  It’s what we’ve trained to do.  Designing something like appealing flatpack furniture that customers take home and assemble themselves just does not fit our self-image.

The smarter, easier to use tools are here.  Who will be the first to package them into a new way to deliver market research?

Copyright 2010 by David G. Bakken.  All rights reserved.

As I noted in my last post, the American Marketing Association’s Advanced Research Techniques Forum took place in San Francisco the second week in June (June 6-9). The program is an intentional mix of presentations from academic researchers and market research practitioners. While the practitioner presentations are often more interesting, at least from the standpoint of a fellow practitioner, this year the best and most useful presentations either came from the academic side or had significant contribution from one or more academic researchers. In that last post I wrote about three papers that explored different aspects of social media. Three more papers from this year’s ART make my list of the most worthwhile presentations.

I just completed an online survey at the invitation of a company I’ve purchased from in the past. It was obvious that the survey was an example of what the market research industry calls “D-I-Y” research. If the quality of the questionnaire had not given this away, there was the “Powered by [name of enterprise feedback software vendor]” notice at the bottom of the screen. I was asked to look at two different print ads for one of the products this company sells and answer a few questions that bore some slight resemblance to the questions you might find in an ad test conducted by one of the MR firms that specialize in that type of work.

One can only assume that the results of this survey are meant to drive a decision of which ad to run (there may be other candidates that I didn’t see). If that’s true, then I think this may be a case where D-I-Y will turn out to be worse than no research at all. The acid test for any market research is whether or not the decisions made on the basis of that research are “better” than the decision that would have been made without the research.

Looking back over the last year in market research offers an opportunity to consider just which transformations, new ideas, industry trends, and emerging techniques might shape MR over the next few years. Here’s a list of eight topics I’ve been following, with thoughts on the potential impact each might have on MR over the next two or three years.

…spontaneous complaints and compliments are to customer loyalty management. Like these forms of customer experience feedback, tweets are unsystematic, unorganized, and representative of who knows what underlying sentiments in the broader universe of individual experiences.

The debate over the accuracy–and quality–of survey research conducted online is flaring at the moment, at least partly in response to a paper by Yeager, Krosnick, Chang, Javitz, Levendusky, Simpson, and Wang: “Comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples.” Gary Langer, director of polling at ABC News, wrote about the paper in his blog “The Numbers” on September 1. In a nutshell, the paper compares survey results obtained via random-digit dialing (RDD) with those from an Internet panel whose panelists were recruited originally by means of RDD and from a number of “opt-in” Internet panels whose panelists were “sourced” in a variety of ways. The results produced by the probability sampling methods are, according to the authors, more accurate than those obtained from the non-probability Internet samples. You can find a response from Doug Rivers, CEO of YouGov/Polimetrix (and Professor of Political Science at Stanford) at “The Numbers,” as well as some other comments.

The analysis presented in the paper is based on surveys conducted in 2004/5. In recent years the coverage of the RDD sampling frame has deteriorated as the number of cellphone-only users has increased (to 20% currently). In response to concerns of several major advertisers about the quality of online panel data, the Advertising Research Foundation (ARF) established an Online Research Quality Council and just this past year conducted new research comparing online panels with RDD telephone samples. Joel Rubinson, Chief Research Officer of The ARF, has summarized some of the key findings in a blog post. According to Rubinson, this study reveals no clear pattern of greater accuracy for the RDD sample. There are, of course, differences in the two studies, both in purpose and method, but it seems that we can no longer assume that RDD samples represent the best benchmark against which to compare all other samples.
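For readers who haven’t seen the paper: the accuracy comparison boils down, roughly, to scoring each sample source by the average absolute deviation of its survey estimates from external benchmarks (e.g., government records). A minimal sketch of that scoring, with invented figures:

```python
# Rough sketch of the accuracy comparison in Yeager et al., assuming the
# metric is average absolute error of survey estimates against external
# benchmarks. All figures here are invented for illustration.
benchmarks = {"smokes": 0.21, "owns_home": 0.68, "has_passport": 0.30}

estimates = {
    "rdd_phone":    {"smokes": 0.23, "owns_home": 0.66, "has_passport": 0.33},
    "opt_in_panel": {"smokes": 0.17, "owns_home": 0.74, "has_passport": 0.38},
}

def avg_abs_error(est):
    """Mean absolute deviation of one sample's estimates from the benchmarks."""
    return sum(abs(est[k] - benchmarks[k]) for k in benchmarks) / len(benchmarks)

for source, est in estimates.items():
    print(f"{source}: {avg_abs_error(est):.3f}")  # lower = more accurate
```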