As I noted in my last post, the American Marketing Association’s Advanced Research Techniques Forum took place in San Francisco the second week in June (June 6-9).  The program is an intentional mix of presentations from academic researchers and market research practitioners.  While the practitioner presentations are often more interesting, at least from the standpoint of a fellow practitioner, this year the best and most useful presentations either came from the academic side or had significant contributions from one or more academic researchers.  In that last post I wrote about three papers that explored different aspects of social media.  Three more papers from this year’s ART Forum make my list of the most worthwhile presentations.

First up is “How to Control for Successive Sample Selection and When Does it Matter for Management Decisions,” presented by Thomas Otter of Goethe University in Frankfurt and his student Stephan Wachtel.  Thomas Otter is one of the smartest people I know, and he consistently tackles tough problems.  In this case the “problem” stems from the fact that we often try to make inferences from samples where we have successively “filtered” out people.  The result is that we have omitted some covariates of buyer decision-making.  Consider, for example, a concept test where potential respondents are screened based on some minimum likelihood of purchasing at least the general type of product or service under consideration.  Marketers obviously want to maximize the return on their research investment, and it makes sense to focus on consumers who are the most likely prospects.  However, as Wachtel and Otter demonstrate, that approach throws away a lot of information about how those consumers become likely prospects.  They attack this problem by developing a model to infer “what drives the probability of passing the filter(s)”.  The model takes into account both successive selection and unobserved drivers.  This type of modeling is not for the faint-hearted, but it does illustrate the power and flexibility of Bayesian thinking when it comes to seemingly intractable problems involving unobserved variables or data values.  This paper also exemplifies the emerging trend of looking beyond the traditional marketing research statistical approaches: the authors represent the system under consideration as a directed acyclic graph (DAG), a representation borrowed from graph theory.
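To make the selection problem concrete, here is a minimal sketch (my own illustration in Python, not the authors’ model): a single unobserved trait drives both the chance of passing a screener and the outcome measured afterward, so the screened sample gives a biased read on the population.  The variable names and parameter values are hypothetical.

```python
# Minimal sketch of successive-selection bias (illustration only, not the
# Wachtel and Otter model).  A latent "category interest" drives both
# screener pass-through and the concept appeal we measure afterward.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

interest = rng.normal(0, 1, n)                                 # unobserved driver
pass_screener = (interest + rng.normal(0, 1, n)) > 0.5         # successive filter
concept_appeal = 2.0 + 0.8 * interest + rng.normal(0, 1, n)    # outcome of interest

print("Population mean appeal :", round(float(concept_appeal.mean()), 2))
print("Screened-sample mean   :", round(float(concept_appeal[pass_screener].mean()), 2))
# The screened sample overstates appeal because the filter selects on the
# unobserved driver; modeling what drives the probability of passing the
# filter is what recovers the lost information.
```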

Anocha Aribarg (University of Michigan) may have advanced conjoint analysis a bit with her paper “Measuring the Effect of Contextual Variables on Preference Using Conjoint Analysis” (with co-authors Yimin Liu of Ford Motor Company and Richard Gonzales of the University of Michigan).  Conjoint analysis is a generally powerful technique for “decomposing” choices among multi-attribute alternatives into separate utility values for each attribute describing the alternatives.  However, we usually have to assume that these utility estimates are constant over different choice situations.  Put differently, we must assume that the utility structure is independent of external or context effects.  There are many different ways that we might incorporate context effects into conjoint models.  Some context effects might be due to individual differences and captured indirectly by a disaggregate model or more explicitly in the form of covariates.  We might try to systematically vary the context for the choices.  We could, for example, introduce a “covariate attribute” into the conjoint design, or we could repeat the experiment under different contextual specifications.  Anocha and her co-authors offer a somewhat more elegant solution in the form of a “regime-shift” model (adapted from a multiple change point model).  The nice thing about their model specification, in my view, is that it appears to handle contextual evolution along a continuous dimension, such as the effect of increasing gas prices on preferences for gas/electric hybrid vehicles.
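As a rough illustration of what a context effect looks like in a choice model (a simple interaction term, not the authors’ regime-shift specification), the sketch below lets a hypothetical fuel-price variable shift the part-worth of a hybrid powertrain.  All attribute levels and coefficients are invented for the example.

```python
# Illustration only: a context variable (fuel price) interacted with an
# attribute (hybrid powertrain) so that part-worths shift with context.
# This is not the regime-shift model; all numbers are hypothetical.
import numpy as np

def choice_shares(fuel_price, beta_hybrid=0.2, beta_context=0.6, beta_price=-1.0):
    """Logit shares for two vehicle profiles as the context (fuel price) varies."""
    # Profile attributes: [is_hybrid, vehicle_price_in_$10k]
    profiles = np.array([[1.0, 3.0],    # hybrid at $30k
                         [0.0, 2.5]])   # conventional at $25k
    # The hybrid part-worth grows with fuel price through the interaction term.
    utility = (beta_hybrid + beta_context * fuel_price) * profiles[:, 0] \
              + beta_price * profiles[:, 1]
    expu = np.exp(utility)
    return expu / expu.sum()

for fuel_price in (2.0, 3.0, 4.0):      # dollars per gallon, purely illustrative
    print(f"fuel price {fuel_price:.1f}: hybrid share = {choice_shares(fuel_price)[0]:.2f}")
```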

Finally, in the presentation that was voted best overall by the attendees, Qing Liu (University of Wisconsin-Madison) presented her research on “Efficient Designs for a Non-Compensatory Choice Model” (co-authored by Neeraj Arora, also of the University of Wisconsin-Madison).  Over the last few years non-compensatory choice processes (in effect, any choice process other than a pure utilitarian trade-off among alternatives) have been explored, and various methods for dealing with non-compensatory models have been developed by both practitioners and academic researchers.  In most approaches, consumers are assumed to apply one or more screening rules to a set of alternatives and then to select one of the remaining alternatives that passed the screening.  Most often, the final selection process is assumed to be compensatory, or utility-maximizing.
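A toy example may help fix ideas.  The sketch below (my own illustration, with hypothetical attributes, thresholds, and part-worths) applies a conjunctive screening rule first and then a compensatory, utility-maximizing choice among the survivors.

```python
# Two-stage choice: a non-compensatory (conjunctive) screen followed by a
# compensatory choice among the survivors.  Attributes, thresholds, and
# part-worths are all hypothetical.
import numpy as np

# Alternatives: columns are [price, battery_life_hours, brand_is_known]
alternatives = np.array([
    [299.0,  8.0, 1.0],
    [199.0,  5.0, 0.0],
    [249.0, 10.0, 1.0],
    [349.0, 12.0, 1.0],
])

# Stage 1: screen out anything over $300 or from an unknown brand, no matter
# how good its other attributes are -- no trade-offs allowed at this stage.
passes_screen = (alternatives[:, 0] <= 300) & (alternatives[:, 2] == 1)

# Stage 2: compensatory utility over the surviving alternatives.
part_worths = np.array([-0.01, 0.3, 0.5])    # price, battery life, known brand
utility = alternatives @ part_worths
utility[~passes_screen] = -np.inf            # screened-out options cannot win

print("Survivors:", np.where(passes_screen)[0].tolist(),
      "-> chosen alternative:", int(np.argmax(utility)))
```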

Solutions to the problem of non-compensatory choice processes (which are not captured by the typical multinomial logit model used to estimate utilities) tend to take one of two forms:  change the data collection or change the model specification.  Examples of changing the data collection include adaptive choice-based conjoint from Sawtooth Software and various preference elicitation methods developed by John Hauser of MIT and his colleagues and by Ely Dahan of UCLA.  The modeling approach is best represented by the Gilbride and Allenby model, which can handle multiple screening rules.  Qing’s paper is an extension of this model, with a focus on the design of the choice experiments.  In a nutshell, the design of the experiment makes a difference in our ability to estimate models with screening rules.  Perhaps the single most important conclusion:  more alternatives in a choice set are better than fewer.
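For readers who have not worked with efficient designs, the sketch below shows the basic machinery for a standard multinomial logit model (a stand-in, not the non-compensatory model in the paper): the D-error of a candidate design under assumed part-worths, which tends to fall as choice sets get larger.  The designs, part-worths, and set sizes are all hypothetical.

```python
# D-error of choice designs under a standard MNL model (a stand-in for the
# more elaborate designs in the paper).  Lower D-error means the design
# carries more information about the part-worths.  All values hypothetical.
import numpy as np

def d_error(choice_sets, beta):
    """choice_sets: list of (J x K) attribute matrices; beta: assumed part-worths."""
    k = len(beta)
    info = np.zeros((k, k))
    for X in choice_sets:
        p = np.exp(X @ beta)
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X   # MNL information matrix
    return np.linalg.det(info) ** (-1.0 / k)

rng = np.random.default_rng(0)
beta = np.array([0.5, -0.3, 0.8])      # assumed part-worths for 3 binary attributes

def random_design(n_sets, n_alts):
    return [rng.choice([0.0, 1.0], size=(n_alts, 3)) for _ in range(n_sets)]

# Holding the number of tasks fixed, larger choice sets tend to yield lower D-error.
for n_alts in (2, 3, 5):
    print(f"{n_alts} alternatives per task: D-error = {d_error(random_design(12, n_alts), beta):.3f}")
```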

All in all, a pretty good program at this year’s ART Forum.

Copyright 2010 by David G. Bakken.  All rights reserved.
