After I posted my brief review of The Predictioneer’s Game by Bruce Bueno de Mesquita, I discovered that you can explore the “game” in more detail at Bueno de Mesquita’s website, http://www.predictioneersgame.com/game.

The Predictioneer’s Game:  Using the Logic of Brazen Self-Interest to See and Shape the Future by Bruce Bueno de Mesquita makes a pretty strong case for using models to make critical decisions, whether in business or international policy.  To anyone involved in prediction science, Bueno de Mesquita’s claim of 90% accuracy (“According to a declassified CIA assessment…”) might seem an exaggeration.  But the author has two things in his favor.  He limits his efforts at prediction to a specific type of problem, and he’s predicting outcomes for which there is usually a limited set of possibilities (for example, whether or not a bank will issue a fraudulent financial report in a given year).

I had the pleasure of participating in a lively discussion on the impact and future of “DIY” (do-it-yourself) research a few weeks ago at the ESOMAR Congress in Athens, Greece.  In a 90-minute “discussion space” session I shared a few thoughts about the future of the market research industry.  The other half of the program was presented by Lucy Davison of marketing consultancy Keen as Mustard and Richard Thornton of CINT.  They shared the results of research on DIY that they conducted among consumers of market research (i.e., “clients”).  The bottom line: many clients are favorable to DIY for a number of reasons.

For my part, I am more interested in DIY as a symptom of deep and fundamental change in the market research industry.  When I began my career in MR (on the client side at first), most research companies were vertically integrated, owning their own data collection capabilities and developing their own CATI software, for example.  This made sense when the ability to coordinate and integrate the diverse activities required for a typical research project was a competitive strength.  Perhaps you remember the days when a strategic segmentation study might have three or four phases, take six to nine months to complete, and cost $500,000 (in 1980 dollars!).  But vertically integrated industries tend to “de-integrate” over time.  Firms may spin off or outsource some of their capabilities, creating value chain specialists who are proficient at one link in the chain.  The emergence of WATS call-centers and off-the-shelf CATI software were early steps on the march towards de-integration for the MR industry.

Technological change (especially in the form of disruptive innovation) also provides opportunity for new entrants.  Sure, some of the face-to-face interviewing companies made the transition to telephone, and many telephone interviewing companies successfully converted from paper and pencil questionnaires to CATI, but each of these shifts provided a point of entry for new players.

The large, integrated firms have managed to hang on to a substantial share of industry profits, but there are three looming threats.  The first is (so-called) “commoditization”–the downward pressure on pricing.  While some supplier side researchers complain that clients are unwilling to pay for quality, this downward pressure is the result of basic competitive dynamics:  there are many competing firms, few barriers to entry, many substitutes (e.g., transactional datamining) and not that much difference in value propositions or business models across MR firms.

The second threat is do-it-yourself research.  At the moment, DIY appeals to the least demanding and most price-sensitive customers.  DIY removes the access and affordability barriers, thereby democratizing survey research.  As Lucy and Richard’s research showed, customers like the low cost, speed and convenience of DIY, and I expect many will move up the learning curve quickly.  I hope so–many of the DIY surveys I’ve seen, even from big companies, have been pretty ghastly.

The last threat to the traditional MR business model comes from the sheer deluge of data generated by both commercial and non-commercial online activity.  How much could Google tell any marketer about customer preferences based just on search data, for example?

At the end of the session in Athens I offered this analogy.  Imagine that you need a bedstead.  You could go to a furniture store and choose from a selection of attractive, well-constructed and expensive bedsteads.  Or you could go to the local home improvement store, purchase some plywood and paint or stain, and, with a few tools (which could be borrowed or rented) and some minimal ability, construct a perfectly serviceable platform bed–at much lower cost.  This represents the difference between the full service integrated research firms at the top of the ladder and what we’ve historically thought of as do-it-yourself market research.  The gap between the two has been sustained until now by a skill barrier and limited access to better, easier to use tools.  This is the gap that Ikea filled in the home furnishing market by creating a new business model based on attractive, customer-assembled furnishings.

Unfortunately for the incumbent research firms, this kind of business model innovation does not often come from the current players in a market.  The incumbents have too much personal investment in the current business model.  Let’s face it–most of us are in market research because we like the high-touch, intellectual problem solving that’s involved.  It’s what we’ve trained to do.  Designing something like appealing flatpack furniture that customers take home and assemble themselves just does not fit our self-image.

The smarter, easier to use tools are here.  Who will be the first to package them into a new way to deliver market research?

Copyright 2010 by David G. Bakken.  All rights reserved.

I’m happy to announce that my paper, “Riding the Value Shift in Market Research:  Only the Paranoid Survive,” received the Fernanda Monti Award for Best Overall Paper at the 2010 ESOMAR Congress that took place in Athens on September 16.  More Info.

The current issue of The Economist carries an article titled “Riders on a swarm.”  The article describes the use of swarm intelligence–the collective behavior that results from the individual actions of many simple “agents”–that is inspired by the behavior of insects like ants and bees or flocks of birds.  Although–unlike a column that appeared in a previous issue–“agent-based simulation” is not mentioned by name, these models have all of the relevant attributes of agent-based simulations, and you can find example models of collective insect and flocking bird behavior in agent-based toolkits such as NetLogo.

As noted in the article, these models have found some business applications in logistics and problems like traffic control.  Ant-based foraging models, for example, have been applied to solving routing problems for package delivery services.  Route optimization, given a set of delivery locations, is a fixed problem with a large number of potential solutions that probably can be solved analytically (or by simple brute force) with enough computing power.  Swarm models have the advantage that they can arrive at a good and often optimal solution without needing to specify and solve a linear programming problem.  By programming simple individual agents, such as artificial ants, with a simple set of rules for interacting with their environment and a set of goal-directed behaviors, the system can arrive at an optimal solution, even though no individual agent “solves” the problem.
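To make the idea concrete, here is a minimal sketch (in Python, not drawn from any commercial routing system) of an ant-colony-style route optimizer for a handful of delivery stops.  The stop locations, parameter values, and random seed are all invented for illustration.

```python
# Minimal ant-colony-style route optimization for a toy delivery problem.
# All stops and parameter values are illustrative, not from a real system.
import math
import random

random.seed(1)
N_STOPS = 8
stops = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N_STOPS)]

def dist(a, b):
    return math.hypot(stops[a][0] - stops[b][0], stops[a][1] - stops[b][1])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % N_STOPS]) for i in range(N_STOPS))

pheromone = [[1.0] * N_STOPS for _ in range(N_STOPS)]
ALPHA, BETA, RHO = 1.0, 2.0, 0.5      # pheromone weight, distance weight, evaporation
N_ANTS, N_ITER = 20, 100

best_tour, best_len = None, float("inf")
for _ in range(N_ITER):
    tours = []
    for _ in range(N_ANTS):
        tour = [random.randrange(N_STOPS)]
        while len(tour) < N_STOPS:
            current = tour[-1]
            choices = [c for c in range(N_STOPS) if c not in tour]
            # Each ant chooses its next stop probabilistically, favoring
            # nearby stops and edges that already carry more pheromone.
            weights = [(pheromone[current][c] ** ALPHA) * ((1.0 / dist(current, c)) ** BETA)
                       for c in choices]
            tour.append(random.choices(choices, weights=weights)[0])
        tours.append(tour)
    # Evaporate old pheromone, then let each ant reinforce the edges of its
    # tour in proportion to how short that tour was.
    for i in range(N_STOPS):
        for j in range(N_STOPS):
            pheromone[i][j] *= (1.0 - RHO)
    for tour in tours:
        length = tour_length(tour)
        if length < best_len:
            best_tour, best_len = tour, length
        for i in range(N_STOPS):
            a, b = tour[i], tour[(i + 1) % N_STOPS]
            pheromone[a][b] += 1.0 / length
            pheromone[b][a] += 1.0 / length

print("best route found:", best_tour, "length:", round(best_len, 1))
```

Notice that no single ant ever “solves” the routing problem; the pheromone trail the colony deposits collectively is what steers later ants toward shorter and shorter tours.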

Something that was new to me in this article is “particle swarm optimization” (PSO), which is inspired by the behavior of flocking birds and swarming bees.  According to the article, PSO was invented in the 1990s by James Kennedy and Russell Eberhart.  Unlike the logistics problems, there may be no closed form or analytically tractable solution to problems such as finding the optimal shape for an airplane wing.  In that case, a simulation in which thousands of tiny flowing particles follow a few simple movement rules may be just the ticket.
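For readers curious about the mechanics, here is a bare-bones PSO sketch on a toy two-dimensional objective.  In a real application the function being minimized would be an expensive engineering simulation (a wing-design code, say); the weights below are conventional textbook values, not anything reported in the article.

```python
# Bare-bones particle swarm optimization (PSO) on a toy objective function.
# Parameter values are conventional choices, used here only for illustration.
import random

random.seed(1)

def f(x, y):
    # Toy objective with its minimum at (3, -2); a real PSO application would
    # call an expensive simulation here instead.
    return (x - 3) ** 2 + (y + 2) ** 2

N_PARTICLES, N_ITER = 30, 200
W, C1, C2 = 0.7, 1.5, 1.5          # inertia, "cognitive" pull, "social" pull

pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(N_PARTICLES)]
vel = [[0.0, 0.0] for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]                    # each particle's own best position so far
pbest_val = [f(*p) for p in pos]
g = min(range(N_PARTICLES), key=lambda i: pbest_val[i])
gbest, gbest_val = pbest[g][:], pbest_val[g]   # the swarm's best position so far

for _ in range(N_ITER):
    for i in range(N_PARTICLES):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # The only movement rule: drift toward your own best spot and the
            # swarm's best spot, with a little randomness thrown in.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        val = f(*pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i][:], val
            if val < gbest_val:
                gbest, gbest_val = pos[i][:], val

print("best point found:", [round(v, 3) for v in gbest], "value:", round(gbest_val, 6))
```

As with the ants, no particle knows the shape of the landscape it is searching; the swarm homes in on good solutions simply by sharing the best positions found so far.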

This stuff is fascinating, but it’s not clear that there are many useful applications for this type of modeling in marketing or marketing research, at least as long as the unit of analysis is the intersection of an individual “consumer” and a specific purchase or consumption occasion.  Of course, if imitation and social contagion are at least as important in our purchase decisions as the intrinsic attributes of products and services (as research by Duncan Watts and his collaborators has shown in the case of popular music), then agent-based simulations may turn out to be one of the best ways to understand and predict consumer behavior.
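As a rough illustration of what such a simulation might look like, here is a toy adoption model in which each consumer adopts a product either on its intrinsic appeal or because enough of her contacts already have.  The network, thresholds, and appeal value are invented for the example; this is not a reconstruction of Watts’s cultural-market experiments.

```python
# Toy agent-based sketch of socially contagious product adoption.
# All parameters (network size, thresholds, intrinsic appeal) are invented.
import random

random.seed(2)
N_CONSUMERS = 500
INTRINSIC_APPEAL = 0.02   # per-step chance of adopting on product merit alone
K_CONTACTS = 6            # each consumer watches a handful of random peers

adopted = [False] * N_CONSUMERS
contacts = [random.sample(range(N_CONSUMERS), K_CONTACTS) for _ in range(N_CONSUMERS)]
threshold = [random.uniform(0.2, 0.6) for _ in range(N_CONSUMERS)]  # share of contacts needed to imitate

for step in range(51):
    for i in range(N_CONSUMERS):
        if adopted[i]:
            continue
        peer_share = sum(adopted[j] for j in contacts[i]) / K_CONTACTS
        # Adopt by imitation (enough contacts have it) or on intrinsic appeal.
        if peer_share >= threshold[i] or random.random() < INTRINSIC_APPEAL:
            adopted[i] = True
    if step % 10 == 0:
        print(f"step {step:2d}: {sum(adopted):3d} of {N_CONSUMERS} consumers have adopted")
```

Even in a sketch this crude, the shape of the adoption curve is driven largely by imitation rather than by the product’s intrinsic appeal, which echoes the point about social contagion above.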

Copyright 2010 by David G. Bakken.  All rights reserved.

I’ll be speaking at the upcoming ESOMAR Congress (Athens, Greece, 12-15 September 2010).  You can find an abstract of my presentation, “Riding the Value Shift in Market Research:  ‘Only the Paranoid Survive’” by clicking here.  Click here to see the full conference program.

The “economic focus” column in the July 24th-30th (2010) issue of The Economist is titled “Agents of change.”   As many of us have come to believe over the past couple of years, the “dynamic stochastic general equilibrium” (DSGE) economic models used by central banks and other economists more or less fell apart when it came to predicting or anticipating the credit-fueled meltdown that we are just now beginning to recover from.  The Economist reports on a June workshop sponsored by the National Science Foundation and attended by central bank economists (from the Fed and the Bank of England), policy advisors, and computer scientists who convened to explore the potential of agent-based models of the economy.

Agent-based models have emerged from the intersection of computer science and social science and have been applied to population dynamics, epidemiology, species extinction, wealth creation, the formation of communication networks, and a host of other problems not well served by traditional economic models.  In contrast to the DSGE approach, which represents the economy as a series of equations to be solved using highly aggregated data as inputs, agent-based models of economic systems are “bottom up”–they generate complex behavior by creating  populations of autonomous agents, giving them simple behavioral rules, and then simulating (over thousands of iterations in many cases) the interactions of these agents.  Under some starting conditions, an agent-based simulation may produce results similar to a DSGE model.  For example, Joshua Epstein and Robert Axtell (one of the NSF workshop organizers) found that agents operating under rules that permitted bargaining for the exchange of two commodities arrived at prices that fluctuated around a sort-of equilibrium point.  By the way, their book, Growing Artificial Societies:  Social Science from the Bottom Up (1996), is one of the best introductions to agent-based modeling.
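To give a flavor of the bottom-up approach, here is a stripped-down bilateral-exchange sketch in the spirit of (but far simpler than) Epstein and Axtell’s bargaining result.  Agents hold two goods, trade in random pairs at a price between their marginal rates of substitution, and a market price emerges with no auctioneer and no system of equations anywhere in sight.  Everything in the code (endowments, the welfare function, the trading rule) is an illustrative simplification, not their actual model.

```python
# Stripped-down bilateral-exchange simulation: prices emerge from pairwise
# trades rather than from solving equations.  All rules are illustrative.
import random

random.seed(3)
N_AGENTS, N_ROUNDS = 200, 30

# Random initial endowments of two goods, A and B.
A = [random.uniform(1, 20) for _ in range(N_AGENTS)]
B = [random.uniform(1, 20) for _ in range(N_AGENTS)]

def welfare(i):
    return (A[i] * B[i]) ** 0.5

def mrs(i):
    # Marginal rate of substitution of good A for good B under W = sqrt(A*B).
    return B[i] / A[i]

for round_no in range(N_ROUNDS):
    prices = []
    order = list(range(N_AGENTS))
    random.shuffle(order)
    for i, j in zip(order[::2], order[1::2]):   # random pairwise meetings
        if mrs(i) == mrs(j):
            continue
        p = (mrs(i) * mrs(j)) ** 0.5            # bargained price: units of B per unit of A
        buyer, seller = (i, j) if mrs(i) > mrs(j) else (j, i)   # buyer values A more
        dA = 0.1 * A[seller]                    # trade a small slice of the seller's A
        dB = p * dA
        if B[buyer] <= dB:
            continue
        total_before = welfare(buyer) + welfare(seller)
        A[buyer] += dA; B[buyer] -= dB
        A[seller] -= dA; B[seller] += dB
        if welfare(buyer) + welfare(seller) < total_before:
            # Undo any trade that leaves the pair worse off overall.
            A[buyer] -= dA; B[buyer] += dB
            A[seller] += dA; B[seller] -= dB
            continue
        prices.append(p)
    if prices:
        mean_price = sum(prices) / len(prices)
        print(f"round {round_no:2d}: mean trade price {mean_price:.3f} over {len(prices)} trades")
```

Run it and the round-by-round mean price fluctuates around a fairly stable value–an equilibrium-like outcome produced by nothing more than agents following a simple trading rule.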

In some ways, agent-based models of the economy are generating new interdisciplinary thinking.  In an Op-Ed piece in the New York Times in October 2008 titled “The Economy Does Not Compute,” Mark Buchanan (a theoretical physicist) describes an agent-based model developed by Yale economist John Geanakoplos and two physicists, Doyne Farmer (another of the NSF workshop organizers) and Stefan Thurner, designed to explore the influence of the level of credit or leverage in a market on the market’s overall stability.   A typical objective for an agent-based model is to develop an understanding of the sensitivity of a complex system to changes in one or more of the model variables, and these researchers found that greater levels of credit lead to greater interdependence among the actors (or agents), which pushes the market toward instability.  The DSGE models are not very good at capturing this kind of process.  And this model revealed something even more striking, but perhaps not surprising to those who have used agent-based models to capture the non-linear nature of complex adaptive systems.  What Geanakoplos, Farmer and Thurner found is that the leverage-induced market instability does not develop gradually but arrives suddenly–with the economy essentially falling off a cliff.

Buchanan goes on to cite two other applications of agent-based modeling.  One involved testing the impact of small transaction taxes in foreign exchange markets, and the second looked at deregulation in a state’s electricity market.  In both cases, the simulations provided insight that challenged the prevailing ideology-based assumptions, and could lead to better policy outcomes.

I’m not sure how readily mainstream economists or central bankers will embrace the agent-based way of thinking, but it’s pretty clear that at a minimum, agent-based approaches to understanding complex systems like the economy can only add to our ability to make better policy decisions.

Copyright 2010 by David G. Bakken.  All rights reserved.