The 20th edition of the Advanced Research Techniques Forum (A/R/T), an annual conference sponsored by the American Marketing Association, took place in San Francisco a couple of weeks ago (June 6-9).  For those of you not familiar with A/R/T, this conference brings academic researchers together with market research practitioners in a format that produces (nearly) equal representation from each of the two groups.  Half of the twenty presentation slots are reserved for “practitioner” papers (where the lead author is not an academic researcher) and half are held for papers from academics.  One of the academic slots goes to the winner of the annual Paul Green award for the best article published in Journal of Marketing Research in the previous calendar year.  More papers than in the past are collaborations between academics and practitioners, and the choice of one or the other as lead author can affect a paper’s chances of getting on the program, given the limited number of slots.

The program is assembled by a committee composed of academics and practitioners (disclaimer–I’ve been on the committee a few times and was program chair for 2008).  In a typical year, the call for papers might yield around 70 submissions.  In addition to the presented papers, “poster” presentations are considered, and the program includes optional tutorials (at extra cost) before and after the main conference sessions.

The A/R/T papers, especially those presented by academic researchers, can be dragged down by the weight of too much algebra.  Over the years, the “advanced” has more often referred to “models” than to “research techniques” in general, and this year was no exception.  Still, there were a few noteworthy presentations.

The first sessions of the conference were devoted to modeling media impact in general and the impact of social media in particular.  One general conclusion from these four papers–it is not easy to define what we mean by the impact of “social media”, let alone measure or model the impact of social media on sales.  Lynd Bacon (Loma Buena Associates), Danielle Murray (Shutterfly) and Peter Lenk (University of Michigan) co-authored a paper describing the use of Item Response Theory (IRT) to develop a scale for measuring user engagement with a social media platform.  We are seeing more use in market research of scaling methods (IRT and “max-diff” are prime examples) that are not based on interval rating scales, and this is an interesting application.  The basic idea here is that some types of behavior (e.g., creating content) represent higher levels of engagement or involvement with the platform than do other behaviors (e.g., simply uploading photos), and IRT offers a way to estimate the positions of different individuals on an underlying involvement dimension.
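
To make the idea a bit more concrete, here is a minimal sketch (in Python, with simulated data) of how a one-parameter, Rasch-style IRT model can place respondents on a latent involvement scale based on binary engagement behaviors.  The behaviors, the data, and the fitting approach are my own illustration, not the authors’ actual model.

```python
# Hypothetical sketch: treat binary "engagement behaviors" like test items
# and fit a one-parameter (Rasch-style) IRT model so that each respondent
# gets a position on a latent involvement dimension.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

rng = np.random.default_rng(0)

behaviors = ["uploaded_photos", "commented_on_post", "created_content"]
n_respondents, n_items = 200, len(behaviors)

# Simulate data from a Rasch model: "easy" behaviors are endorsed by most
# people, demanding ones (e.g., creating content) only by the highly involved.
true_theta = rng.normal(0, 1, n_respondents)   # latent involvement
true_b = np.array([-1.0, 0.0, 1.5])            # behavior "difficulty"
y = (rng.random((n_respondents, n_items))
     < expit(true_theta[:, None] - true_b[None, :])).astype(float)

def neg_log_lik(params):
    theta, b = params[:n_respondents], params[n_respondents:]
    p = expit(theta[:, None] - b[None, :])
    p = np.clip(p, 1e-9, 1 - 1e-9)  # numerical safety for log()
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Joint maximum likelihood is enough for a sketch; a real application would
# use marginal ML or a Bayesian fit and anchor the scale more carefully.
fit = minimize(neg_log_lik, np.zeros(n_respondents + n_items), method="L-BFGS-B")
theta_hat, b_hat = fit.x[:n_respondents], fit.x[n_respondents:]

print("Estimated behavior difficulties:", dict(zip(behaviors, b_hat.round(2))))
print("First five involvement scores:", theta_hat[:5].round(2))
```

The estimated “difficulties” order the behaviors from common to demanding, and the respondent scores locate each individual on the same underlying involvement scale–which is the payoff of using IRT rather than an interval rating scale here.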

In the next paper, Wendy Moe (University of Maryland) presented research exploring the “social dynamics” of online product ratings forums.  She found that the “valence” of reviews (whether they are positive or negative) is influenced by the dynamics of the sites where the reviews are posted.  Specifically, higher volumes of ratings lead to more polarization of ratings.  Her findings suggest to me that the social dynamics of the ratings/reviewing environment may be just as important as the intrinsic merits of the products or services being rated.
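
For readers who want a concrete picture, here is a small, hypothetical sketch of how one might look for that volume/polarization relationship in ratings data, using the share of extreme (1- or 5-star) ratings as a crude polarization measure.  The data and the measure are illustrative only, not Moe’s analysis.

```python
# Hypothetical check of the volume -> polarization relationship:
# for each product, compare the share of extreme (1- or 5-star) ratings
# against the number of ratings it has received.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Simulated ratings in which high-volume products drift toward the extremes.
rows = []
for product in range(50):
    n_ratings = rng.integers(10, 500)
    p_extreme = min(0.2 + 0.001 * n_ratings, 0.8)
    stars = np.where(rng.random(n_ratings) < p_extreme,
                     rng.choice([1, 5], n_ratings),
                     rng.integers(2, 5, n_ratings))
    rows.append({"product": product, "volume": n_ratings,
                 "polarization": np.mean((stars == 1) | (stars == 5))})

ratings = pd.DataFrame(rows)
print("Correlation of volume with polarization:",
      round(ratings["volume"].corr(ratings["polarization"]), 2))
```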

There are lots of challenges in measuring the impact of social media beyond the typical problems of marketing mix models.  Doug Bowman (Emory University), along with co-authors Manish Tripathi (Emory), Dan Young (P&G), Melinda Smith de Borrero (TNS Worldwide) and Natasha Stevens (TNS-Cymphony), described efforts to explicitly incorporate social media activity into marketing mix models.  In my view, the biggest problem with social media activity (represented in this case by blogs, product forums, and user groups) is quantifying variability in the activity.  These authors identify four metrics: buzz volume (i.e., incidence of brand name mentions), sentiment (incidence of positive, negative, and neutral mentions), structured listening (i.e., incidence of a priori defined keywords), and unstructured listening (words and phrases that are not pre-specified but are detected by manually examining samples of activity).  Automated content analysis, especially with respect to detecting sentiment, is error-prone.  Still, for purposes of improving marketing mix models, these measures may be “good enough,” and the authors conclude that adding social media variables improves the accuracy of a “traditional” marketing mix model.
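
As a rough illustration of what “adding social media variables” to a marketing mix model can look like, here is a short Python sketch that augments a simple log-log sales regression with buzz volume and net sentiment and compares the fit.  The variable names and simulated data are assumptions on my part; the authors’ models are considerably richer (adstock, seasonality, and so on).

```python
# Hypothetical illustration: augment a simple (log-log) marketing mix
# regression with social media metrics and compare fit with and without them.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_weeks = 104

df = pd.DataFrame({
    "tv_spend": rng.gamma(5, 20, n_weeks),           # traditional media driver
    "price_index": rng.normal(100, 5, n_weeks),
    "buzz_volume": rng.poisson(200, n_weeks),        # brand-mention counts
    "net_sentiment": rng.normal(0.1, 0.2, n_weeks),  # (positive - negative) share
})
# Simulate sales that respond to both traditional and social drivers.
df["sales"] = np.exp(
    8 + 0.3 * np.log(df["tv_spend"]) - 0.02 * (df["price_index"] - 100)
    + 0.001 * df["buzz_volume"] + 0.5 * df["net_sentiment"]
    + rng.normal(0, 0.1, n_weeks)
)

base = smf.ols("np.log(sales) ~ np.log(tv_spend) + price_index", data=df).fit()
full = smf.ols("np.log(sales) ~ np.log(tv_spend) + price_index"
               " + buzz_volume + net_sentiment", data=df).fit()

print(f"Adj. R^2 without social media terms: {base.rsquared_adj:.3f}")
print(f"Adj. R^2 with social media terms:    {full.rsquared_adj:.3f}")
```

In practice, of course, the comparison would be made on holdout periods rather than on in-sample fit, and the social media inputs would be the noisy buzz and sentiment counts described above rather than clean simulated series.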

Next time I’ll describe three more presentations that I found especially useful.

Copyright 2010 by David G. Bakken.  All rights reserved.
