CareerCast.com has released its list of “most underrated jobs” for 2012.  Number 5 on the list?  Market research analyst.  Ahead of MR on the list are computer systems analysts, civil engineers, veterinarians, and biologists.  These jobs make the list on the basis of projected growth in employment, relatively good compensation, and lower stress levels than more glamorous or otherwise high-profile occupations.  Here’s what CareerCast says about market research as a career:  “One of the fastest growing fields per the [Bureau of Labor Statistics], market research analyst makes a vital impact on the direction of business decisions by applying data of economic and technological trends.” (emphasis added)

ESOMAR Congress 2012 took place in Atlanta, Georgia during the second week in September.  This was the first time the Congress had been held in the United States, reflecting the extent to which ESOMAR has become the most important global organization of market and social researchers.  This also happens to be the 65th anniversary year for ESOMAR.

The current and past ESOMAR Councils, together with the Director General, Finn Raben, and the rest of the ESOMAR team, have done a terrific job in building a robust and resilient organization and a public voice for market research.  My desire to serve ESOMAR as a council member does not reflect any dissatisfaction with the direction of the organization.  Rather, I believe that I can make a contribution to the continued growth and health of ESOMAR.

My involvement in ESOMAR has grown steadily since I attended my first ESOMAR-sponsored conference ten years ago (Automotive–where I had the privilege of presenting my first ESOMAR paper).  More papers and events followed (Consumer Insights, Asia Pacific, and Congress).  I was honored beyond imagination to have my 2010 Congress paper, “Riding the Value Shift in Market Research,” selected for the Excellence Award.  More recently I’ve served on the juries for the Effectiveness Award and for this year’s Excellence Award and I presented a workshop on the cognitive aspects of survey design (“Think Like a Respondent”) at the Online conference in 2010.

Outside of ESOMAR I’ve been involved with the American Marketing Association, CASRO, and the American Psychological Association for many years.  I served as president of my local AMA chapter, leading the successful turnaround of a struggling chapter.

As I expressed in my Candidate Statement, I have three particular areas of interest that support ESOMAR’s overall mission of encouraging, advancing, and elevating market research throughout the world.  These areas are:  professional development, collaboration with other related MR organizations, and finding new business models that will enable sustained growth for MR.

ESOMAR’s continued growth will depend on both capturing and reflecting the diversity of global market research.  The election rules ensure a measure of geographical diversity.  It’s equally important to have diversity of experience and industry perspective.  I believe that I bring a unique point of view–as do the other nominees–that will help me contribute even more to ESOMAR’s future success if I am fortunate enough to serve as a council member.

Thank you!


2 October 2012


There’s no question that marketers are more focused than ever on the ROI of marketing research.  All too often, however, it seems that efforts to improve ROI aim to get more research per dollar spent rather than better research. 

Better survey design is one sure way to improve the ROI of marketing research.  However, despite advances in our understanding of the cognitive processes involved in answering surveys, market researchers continue to write poor survey questions that may introduce considerable measurement error. 

I think this is due in part to the fact that the processes involved in asking a question are fundamentally different from the processes involved in answering that same question.  Recent contributions to our understanding of the answering process have been integrated into a theory of survey response by Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski (The Psychology of Survey Response, Cambridge University Press, 2000).  According to Tourangeau et al., answering a survey question involves four related processes:  comprehending the question, retrieving relevant information from memory, evaluating the retrieved information, and matching the internally generated answer to the available responses in the survey question.

“Think aloud” pretesting, sometimes known as “cognitive” pretesting or “concurrent protocol analysis,” is an important tool for improving the quality of survey questions.  Well-designed think aloud pretests have often been, in my experience, the difference between research that impacts a firm’s business results and research that ends up on the shelf for lack of confidence in the findings.

Warning–what follows is blatant self-promotion of a sort.  ESOMAR is offering my workshop, “Think like a respondent:  A cognitive approach to designing and testing online questionnaires” as part of Congress 2011.  The workshop is scheduled for Sunday, September 18, 2011. This year’s Congress will be held in Amsterdam.  I’ve offered the workshop once before, at the ESOMAR Online Conference in Berlin last October.

Hope to see you in Amsterdam.

In November of last year David Leonhardt, an economics writer for The New York Times, created an “interactive puzzle” that enabled readers to create a solution for reducing the federal deficit by $1.3 trillion (or thereabouts) in 2030.  A variety of options involving either spending cuts or tax increases that reflected the recommendations of the deficit reduction commission were offered, along with the size of the reduction associated with each option.  Visitors to the puzzle simply selected various options until they achieved the targeted reduction.
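The mechanics of the puzzle are easy to mimic.  Here is a minimal sketch in Python; the option names and dollar figures are invented for illustration, not Leonhardt’s actual list:

```python
# Toy version of a deficit-reduction puzzle: pick options until the
# combined savings reach the target. All figures are hypothetical.
TARGET = 1.3e12  # $1.3 trillion

# (option, projected savings in dollars) -- invented for illustration
options = {
    "cap discretionary spending": 3.0e11,
    "raise retirement age": 2.5e11,
    "reduce farm subsidies": 1.5e11,
    "add a carbon tax": 4.0e11,
    "limit tax deductions": 3.5e11,
}

def solve(selected):
    """Return total savings and whether the target is met."""
    total = sum(options[name] for name in selected)
    return total, total >= TARGET

total, done = solve(["cap discretionary spending", "add a carbon tax",
                     "limit tax deductions", "raise retirement age"])
print(f"${total / 1e12:.2f} trillion saved; target met: {done}")
```

The interesting data, of course, are not the totals but which combinations of options different people choose to reach them.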

The options represented trade-offs, the simplest being that between cutting programs or raising revenues.  Someone has to suffer, and suffering was not evenly distributed across the options.  Nearly seven thousand Twitter users completed the puzzle, and Leonhardt has summarized the choices.  You might still be able to access the puzzle online.

Leonhardt was able to group the solutions according to whether they seemed to consist mostly of spending cuts or a mix of spending cuts and tax increases.  He admits that the “sample” is not scientific and, given that it consists of Twitter users, may skew young.  Unfortunately, no personal data were collected from those who completed the puzzle, so we’re left to speculate about the patterns of choices.  Perhaps a little data mining would shed some additional light on the clustering of responses.

Even though this is not survey research in the way that we know it, there may be much value in using this type of puzzle to measure public opinion about the tough choices that the U.S. is facing.  The typical opinion survey might ask respondents whether they “favor” one course of action or another (“Do you favor spending cuts or tax increases for reducing the deficit?”).  The options presented in Leonhardt’s puzzle represent real policy choices, and the differences between them force you to consider the trade-offs you are willing to make.  While the choices were comprehensive, they were not contrived in the way that conjoint analysis structures choices; that might present a problem if we are trying to develop a model to predict or explain preferences.

There’s no reason this technique cannot be used with the same kinds of samples that we obtain for much online survey research.  Add a few demographic and political orientation questions and you have what I think could be a powerful way to capture the trade-offs that the public is willing to make.

Copyright 2011 by David G. Bakken.  All rights reserved.

If you were listening to NPR’s “All Things Considered” broadcast on January 18, you might have heard a brief report on research that reveals regional differences (“dialects”) in word usage, spellings, slang, and abbreviations in Twitter postings.  For example, Northern and Southern Californians use the spelling variants koo and coo to mean “cool.”

Finding regional differences in these written expressions is interesting in its own right, but I’ve just finished reading the paper describing this research and there’s a lot more going on here than simply counting and comparing expressions across different geographic regions.  The paper is an excellent example of what market researchers might do to analyze social media.

The study authors–Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, and Eric P. Xing–are affiliated with the School of Computer Science at Carnegie Mellon University (Eisenstein, who was interviewed for the ATC broadcast, is a postdoctoral fellow).  They set out to develop a latent variable model to predict an author’s geographic location from the characteristics of text messages.  As they point out, their work is unique in that they use raw text data (although “tokenized”) as input to the modeling.  They develop and compare a few different models, including a “geographic topic model” that incorporates the interaction between base topics (such as sports) and an author’s geographic location, as well as two additional latent variable models:  a “mixture of unigrams” (which assumes a single topic) and “supervised latent Dirichlet allocation.”  If you have not yet figured it out, the models, as described, use statistical machine learning methods.  That means that some of the terminology may be unfamiliar to market researchers, but the description of the algorithm for the geographic topic model resembles the hierarchical Bayesian methods using the Gibbs sampler that have come into fairly wide use in market research (especially for choice-based conjoint analysis).
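The full geographic topic model is well beyond a blog post, but the core prediction task–infer an author’s region from word choice–can be sketched with something far simpler.  Below is a toy multinomial naive Bayes classifier (my drastically simplified stand-in, not the authors’ model) trained on invented regional vocabularies; the regions and tokens are made up to echo the paper’s koo/coo example:

```python
import math
from collections import Counter, defaultdict

# Toy training data: (region, tokens) pairs. Tokens are invented stand-ins
# for the regional spelling variants the paper studies (e.g. "koo" vs "coo").
train = [
    ("NorCal", ["hella", "koo", "game", "tonight"]),
    ("NorCal", ["koo", "hella", "traffic"]),
    ("SoCal",  ["coo", "beach", "game"]),
    ("SoCal",  ["coo", "traffic", "tonight"]),
]

# Tally word counts and document counts per region.
word_counts = defaultdict(Counter)
doc_counts = Counter()
vocab = set()
for region, tokens in train:
    doc_counts[region] += 1
    word_counts[region].update(tokens)
    vocab.update(tokens)

def predict(tokens):
    """Most probable region under naive Bayes with add-one smoothing."""
    best, best_lp = None, -math.inf
    total_docs = sum(doc_counts.values())
    for region in doc_counts:
        lp = math.log(doc_counts[region] / total_docs)  # region prior
        denom = sum(word_counts[region].values()) + len(vocab)
        for t in tokens:
            lp += math.log((word_counts[region][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = region, lp
    return best

print(predict(["koo", "hella"]))  # -> NorCal (on this toy data)
```

The real model does much more–it discovers latent topics and lets geography shift each topic’s vocabulary–but the basic logic of scoring regions by how probable they make the observed words is the same.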

This research is important for market research because it demonstrates a method for estimating characteristics of individual authors from the characteristics of their social media postings.  While we have not exhausted the potential of simpler methods (frequency and sentiment analyses, for example), this looks like the future of social media analysis for marketing.

Copyright 2011 by David G. Bakken.  All rights reserved.

There’s an interesting article by Jonah Lehrer in the Dec. 13 issue of The New Yorker, “The Truth Wears Off:  Is there something wrong with the scientific method?”  Lehrer reports that a growing number of scientists are concerned about what psychologist Joseph Banks Rhine termed the “decline effect.”  In a nutshell, the “decline effect” is a tendency for the size of an observed effect to shrink over the course of studies attempting to replicate it.  Lehrer cites examples from studies of the clinical outcomes for a class of once-promising antipsychotic drugs as well as from more theoretical research.  This is a scary situation given the inferential nature of most scientific research.  Each set of observations represents an opportunity to disconfirm a hypothesis.  As long as subsequent observations don’t lead to disconfirmation, our confidence in the hypothesis grows.  The decline effect suggests that replication is more likely, over time, to disconfirm a hypothesis than not.  Under those circumstances, it’s hard to develop sound theory.
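One mundane mechanism that can produce a decline effect is selective publication: if early studies see print only when they clear a significance threshold, published effect sizes are inflated, and unselected replications will “decline” back toward the true value.  A quick simulation (with invented numbers) makes the point:

```python
import random

random.seed(42)

TRUE_EFFECT = 0.3   # the real underlying effect size
SE = 0.3            # sampling error of each study's estimate
N_STUDIES = 10_000

# Each study observes the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Early literature: only "significant" results (z > 1.96) get published.
published = [e for e in estimates if e / SE > 1.96]

# Later replications: reported regardless of outcome.
replications = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

mean = lambda xs: sum(xs) / len(xs)
print(f"true effect:           {TRUE_EFFECT:.2f}")
print(f"mean published effect: {mean(published):.2f}")     # inflated
print(f"mean replication:      {mean(replications):.2f}")  # near the truth
```

Nothing about the underlying effect changed between the two waves; the “decline” is entirely an artifact of which studies made it into print first.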

Given that market researchers apply much of the same reasoning as scientists in deciding what’s an effect and what isn’t, the decline effect is a serious threat to creating customer knowledge and making evidence-based marketing decisions.

Declining response rates have been a problem in survey research for a long time.  Now, according to a study by Lori Foster Thompson of North Carolina State University, Zhen Zhang of Arizona State University, and Richard D. Arvey of National University of Singapore, there may be a genetic predisposition to decline to participate in surveys.  Or maybe not.

The study, “Genetic underpinnings of survey response,” is to be published in the Journal of Organizational Behavior.  A press release from North Carolina State University quotes Dr. Foster Thompson:  “We wanted to know whether people are genetically predisposed to ignore requests for survey participation.  We found that there is a pretty strong genetic predisposition to not reply to surveys.”

The researchers sent a survey to more than 1,000 sets of twins, some identical (and possessing identical DNA) and some fraternal (no more genetically similar than any two siblings).  The study found that it was possible to predict the propensity to respond for one identical twin from the response (or non-response) of the other twin, but there was no such relationship for the fraternal twins.  The researchers “used quantitative genetic techniques to estimate the genetic, shared environmental, and nonshared environmental effects on people’s compliance with the request for survey participation,” according to the paper abstract.
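The classic version of those “quantitative genetic techniques” is the ACE decomposition: identical (MZ) twins share all of their genes while fraternal (DZ) twins share on average half, so comparing the two correlations splits variance into additive genetic (A), shared environment (C), and nonshared environment (E) components via Falconer’s formulas.  A sketch with hypothetical twin correlations (not the paper’s actual figures):

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's estimates from MZ and DZ twin correlations.

    h2 = 2(r_MZ - r_DZ)   additive genetic variance (A)
    c2 = 2 r_DZ - r_MZ    shared environment (C)
    e2 = 1 - r_MZ         nonshared environment plus error (E)
    """
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

# Hypothetical twin correlations in response propensity, for illustration only:
h2, c2, e2 = falconer_ace(r_mz=0.50, r_dz=0.25)
print(f"A = {h2:.2f}, C = {c2:.2f}, E = {e2:.2f}")  # A = 0.50, C = 0.00, E = 0.50
```

Note how much work the comparison between twin types does here: any difference between the MZ and DZ correlations is attributed to genes, which is exactly the assumption the next paragraphs question.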

Notwithstanding the power of the right statistical methods, it’s very difficult to rule out plausible rival hypotheses in single-generation familial inheritance studies.  I spent one summer during graduate school analyzing data from an adoption study attempting to prove the heritability of schizophrenia.  In addition to the adoption paradigm (that is, looking for differential incidence rates among the biological and adoptive relatives of the adopted, afflicted individual), we have two types of twin studies–those that compare identical twins reared apart and those that compare sets of identical twins with sets of fraternal twins, as in this case.  Studies of twins reared apart got a bad rap as a result of Cyril Burt’s fraudulent data purporting to show the heritability of intelligence.  Comparisons of identical and fraternal twins run up against the fact that having an identical twin is a very different experience from having a fraternal twin.

I see two potential problems with this study.  First, we can’t rule out differences in interaction between identical twins and fraternal twins as a possible explanation.

The second problem–all genes are expressed at a cellular level in the form of different proteins.  Survey non-response, in contrast, is a specific, high-order behavior (far removed from cellular activity) that is unlikely to be governed by a few small chemical differences.  I believe that anyone making a claim about the heritability of any behavior ought to suggest a plausible cellular mechanism.  It’s also desirable to have some plausible selective pressure that would favor such a genetic predisposition.  Given that survey taking is a relatively recent (in human history) activity, I’m not sure you can make a case for any selective advantage in refusing to participate in surveys.

Maybe–and it’s a big maybe–there’s a selective advantage in some cluster of behaviors–such as cooperation–that just happens to manifest itself in propensity to take surveys.  That might be plausible.  Perhaps the authors offer that explanation in the full paper.  We’ll have to see.

Copyright 2010 by David G. Bakken.  All rights reserved.