There’s no question that marketers are more focused than ever on the ROI of marketing research.  All too often, however, it seems that efforts to improve ROI aim to get more research per dollar spent rather than better research. 

Better survey design is one sure way to improve the ROI of marketing research.  However, despite advances in our understanding of the cognitive processes involved in answering surveys, market researchers continue to write poor survey questions that may introduce considerable measurement error. 

I think this is due in part to the fact that the processes involved in asking a question are fundamentally different from the processes involved in answering that same question.  Recent contributions to our understanding of the answering process have been integrated into a theory of survey response by Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski (The Psychology of Survey Response, Cambridge University Press, 2000).  According to Tourangeau et al., answering a survey question involves four related processes: comprehending the question, retrieving relevant information from memory, evaluating the retrieved information, and matching the internally generated answer to the available responses in the survey question.

“Think aloud” pretesting, sometimes known as “cognitive” pretesting or “concurrent protocol analysis,” is an important tool for improving the quality of survey questions, and well-designed think aloud pretests have often been, in my experience, the difference between research that impacts a firm’s business results and research that ends up on the shelf for lack of confidence in the findings.

Warning–what follows is blatant self-promotion of a sort.  ESOMAR is offering my workshop, “Think like a respondent:  A cognitive approach to designing and testing online questionnaires” as part of Congress 2011.  The workshop is scheduled for Sunday, September 18, 2011. This year’s Congress will be held in Amsterdam.  I’ve offered the workshop once before, at the ESOMAR Online Conference in Berlin last October.

Hope to see you in Amsterdam.


I had the pleasure of participating in a lively discussion on the impact and future of “DIY” (do-it-yourself) research a few weeks ago at the ESOMAR Congress in Athens, Greece.  In a 90-minute “discussion space” session I shared a few thoughts about the future of the market research industry.  The other half of the program was presented by Lucy Davison of marketing consultancy Keen as Mustard and Richard Thornton of CINT.  They shared the results of some research on DIY research that they conducted among consumers of market research (i.e., “clients”).  Bottom line: many clients are favorable to DIY for a number of reasons.

For my part, I am more interested in DIY as a symptom of deep and fundamental change in the market research industry.  When I began my career in MR (on the client side at first), most research companies were vertically integrated, owning their own data collection capabilities and developing their own CATI software, for example.  This made sense when the ability to coordinate and integrate the diverse activities required for a typical research project was a competitive strength.  Perhaps you remember the days when a strategic segmentation study might have three or four phases, take six to nine months to complete, and cost $500,000 (in 1980 dollars!).  But vertically integrated industries tend to “de-integrate” over time.  Firms may spin off or outsource some of their capabilities, creating value chain specialists who are proficient at one link in the chain.  The emergence of WATS call-centers and off-the-shelf CATI software were early steps on the march towards de-integration for the MR industry.

Technological change (especially in the form of disruptive innovation) also provides opportunity for new entrants.  Sure, some of the face-to-face interviewing companies made the transition to telephone, and many telephone interviewing companies successfully converted from paper and pencil questionnaires to CATI, but each of these shifts provided a point of entry for new players.

The large, integrated firms have managed to hang on to a substantial share of industry profits, but there are three looming threats.  The first is (so-called) “commoditization”–the downward pressure on pricing.  While some supplier side researchers complain that clients are unwilling to pay for quality, this downward pressure is the result of basic competitive dynamics:  there are many competing firms, few barriers to entry, many substitutes (e.g., transactional data mining) and not that much difference in value propositions or business models across MR firms.

The second threat is do-it-yourself research.  At the moment, DIY appeals to the least demanding and most price sensitive customers.  DIY removes the access and affordability barriers, thereby democratizing survey research.  As Lucy and Richard’s research showed, customers like the low cost, speed and convenience of DIY, and I expect many will move up the learning curve quickly.  I hope so–many of the DIY surveys I’ve seen from even big companies have been pretty ghastly.

The last threat to the traditional MR business model comes from the sheer deluge of data generated by both commercial and non-commercial online activity.  How much could Google tell any marketer about customer preferences based just on search data, for example?

At the end of the session in Athens I offered this analogy.  Imagine that you need a bedstead.  You could go to a furniture store and choose from a selection of attractive, well-constructed and expensive bedsteads.  Or you could go to the local home improvement store, purchase some plywood and paint or stain, and with a few tools (which could be borrowed or rented) and some minimal ability, construct a perfectly serviceable platform bed–at much lower cost.  This represents the difference between the full service integrated research firms at the top of the ladder and what we’ve historically thought of as do-it-yourself market research.  The gap between the two has been sustained until now by a skill barrier and limited access to better, easier to use tools.  This is the gap that Ikea filled in the home furnishing market by creating a new business model based on attractive, customer-assembled furnishings.

Unfortunately for the incumbent research firms, this kind of business model innovation does not often come from the current players in a market.  The incumbents have too much personal investment in the current business model.  Let’s face it–most of us are in market research because we like the high-touch, intellectual problem solving that’s involved.  It’s what we’ve trained to do.  Designing something like appealing flatpack furniture that customers take home and assemble themselves just does not fit our self-image.

The smarter, easier to use tools are here.  Who will be the first to package them into a new way to deliver market research?

Copyright 2010 by David G. Bakken.  All rights reserved.

I just completed an online survey at the invitation of a company I’ve purchased from in the past.  It was obvious that the survey was an example of what the market research industry calls “D-I-Y” research.  If the quality of the questionnaire had not given this away, there was the “Powered by [name of enterprise feedback software vendor]” at the bottom of the screen.  I was asked to look at two different print ads for one of the products this company sells and answer a few questions that bore some slight resemblance to the questions you might find in an ad test conducted by one of the MR firms that specialize in that type of work.

One can only assume that the results of this survey are meant to drive a decision of which ad to run (there may be other candidates that I didn’t see).  If that’s true, then I think this may be a case where D-I-Y will turn out to be worse than no research at all.  The acid test for any market research is whether or not the decisions made on the basis of that research are “better” than the decisions that would have been made without the research.

The Psychology of Survey Response by Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski (Cambridge University Press, 2000) will change the way you think about the “craft” of survey design.  While there are other, well-regarded books on survey question construction (such as Asking Questions by Norman Bradburn, Seymour Sudman, and Brian Wansink, Jossey-Bass, 2004) and tons of individual research papers and articles on various aspects of survey design, measurement scales, question construction and the like, this is the first book I’ve encountered that presents a practical conceptual framework for understanding the cognitive processes that produce a response to a given question.  Moreover, the authors review a lot of relevant research to support their framework.