The pharmaceutical industry spends a great deal of time and money trying to predict how its products will be embraced. Yet given the complex nature of the marketplace, forecasts based on self-stated interest are often wrong. Take Exubera. Heralded worldwide as a diabetes savior, the inhaled insulin never gained market traction and was withdrawn just two years post-launch. By contrast, Lipitor was expected to do reasonably well, but nobody knew it would become the biggest-selling drug of all time. How could so many people be so wrong? Is it that hard to “get it right”?
Yes. That’s why researchers are hired to harness the moving parts of the marketplace and help translate raw data into viable business decisions. For many profitable decades, Big Pharma’s marketing has been guided by tools and techniques such as personal interviews, surveys, focus groups, and conjoint analysis.
Recently, forecasting has received a lot of attention, as we continue to experiment with new, innovative modeling tools, the most exciting of which is predictive markets. There is a great temptation for companies to view predictive markets as a silver bullet. But is it? Can crowds actually do a better job than “experts,” or is that viewpoint more of a fad than a foundation for accurate future forecasting?
My Brother’s Keeper?
Forecasts are traditionally based on preference share: the average self-reported likelihood to buy or adopt a product, as captured in a survey. But researchers frequently discount that share by as much as 50 percent, reflecting the widely held belief that what people actually do differs from what they say they will do.
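The arithmetic behind a discounted preference-share forecast is simple enough to sketch. The figures below are invented for illustration; only the averaging and the up-to-50-percent discount come from the text above.

```python
# Hypothetical preference-share forecast.
# Respondents state a 0-1 likelihood of adopting a product; the mean is
# the raw preference share, which is then discounted to account for the
# gap between stated intent and actual behavior.

stated_likelihoods = [0.80, 0.60, 0.40, 0.70, 0.50]  # fabricated survey answers

raw_share = sum(stated_likelihoods) / len(stated_likelihoods)
discount = 0.50  # the "as much as 50 percent" discount mentioned above
adjusted_share = raw_share * (1 - discount)

print(f"Raw preference share:      {raw_share:.0%}")   # 60%
print(f"Discounted forecast share: {adjusted_share:.0%}")  # 30%
```

The size of that discount is itself a judgment call, which is part of why forecasters keep looking for methods with less built-in fudge.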
Long used in elections, predictive markets rely on valuing individual respondents as observers of the universe in which they live or practice, a principle articulated in The Wisdom of Crowds, by James Surowiecki. So enticing is the promise of aggregate data that predictive markets have entered the public arena through such venues as TradeSports, the Iowa Electronic Markets, NewsFutures, the Hollywood Stock Exchange, and Bet2Give, along with the more academic eLab at the Sloan Center for Internet Retailing at the University of California. (Across all of these interactive forums, data is based on individual predictions of group behavior, measured by “putting your money where your mouth is.”)
Are these predictions valid? If so, under what circumstances, and on whose authority, should we trust the results? In 2007, Eidetics began exploring the reliability of predictive markets’ aggregation philosophy by asking observer-versus-data-point questions in selected quantitative research projects to supplement conventional forecasting methods. To date, we have 15 primary market research studies under our belt.
We also fielded an independent methodology research study using a hypothetical chemotherapeutic agent for non-small cell lung cancer. In this study, we asked oncologists two sets of product-interest questions: one based on typical patient allocation exercises, and another based on principles from The Wisdom of Crowds. We explored three different product profiles using two questions: How much would you use this product for your patients? How much do you think your peers would use it?
What We Learned

1. Predictive market research results are, in fact, “tighter” (less variable) than those generated by traditional choice modeling studies. Using oncologists as “observers” (versus individual “data points”) generated denser clusters of answers: researchers could be more certain of their results with fewer respondents. So predictive market research could help save recruiting time and money for fixed, targeted products requiring a quick turnaround.
2. The context and order in which questions are asked make a significant difference in the outcome. In the aggregate, predictive market questions yielded more optimistic responses than those based on classic choice modeling; oncologists thought their peers would show higher overall preference shares than the mean of their individual responses. Further, asking predictive questions before the individual allocations resulted in higher estimations than when the questions were asked in the reverse order. (However, in several of our client studies, the reverse has been true: estimates of peer use are lower than estimates of personal use. This interesting finding bears more exploration. In the meantime, we are comfortable saying that these aggregate methods require particularly careful design and interpretation.)
3. Physician responses to predictive market questions are related to their own treatment approaches and attitudes about the market. Oncologists who described themselves as “early adopters” also expected their new product usage to be higher than that of their peers (i.e., the self-reported interest yields preference shares higher than the predictive market shares). Oncologists who estimated their use higher than that of their peers were twice as interested in this hypothetical product as the rest of the sample.
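The practical payoff of the first finding, that "observer" answers cluster more tightly, follows from basic sampling statistics: the standard error of a mean is the answer spread divided by the square root of the sample size, so halving the spread cuts the respondents needed for a given precision by a factor of four. A sketch with invented spreads, not figures from our studies:

```python
import math

def respondents_needed(sd: float, target_se: float) -> int:
    """Sample size at which the standard error of the mean
    (sd / sqrt(n)) reaches the target precision."""
    return math.ceil((sd / target_se) ** 2)

# Hypothetical spreads, in preference-share percentage points:
sd_individual = 20.0  # individual allocation answers (wider)
sd_observer = 10.0    # "observer" answers (tighter clusters)
target_se = 2.0       # desired precision of the estimated share

print(respondents_needed(sd_individual, target_se))  # 100
print(respondents_needed(sd_observer, target_se))    # 25
```

Under these assumed numbers, the tighter observer-style answers would let a study hit the same precision with a quarter of the recruits, which is where the time and cost savings come from.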
The study also highlighted some intriguing aspects of oncologists as a target group.
Beyond these specific responses, we see evidence in both this and other studies that clinician aggressiveness and the clinical urgency of a specific setting together shape personal usage behavior. An “action-oriented” physician faced with a challenging case is likely to consider trying a new product; asked instead to consider usage of the same product by a broad array of physicians across a range of cases, that physician is likely to estimate lower usage. In addition, we see some evidence that specialists and generalists have different patterns of self-assessment and “crowd assessment.”
A Note of Caution
Based on our experience, we think the best ways to use predictive market research vary considerably with the specific therapeutic category to which it’s applied. Predictive markets seem best suited to short-timeline, fixed-product, targeted client questions. Even then, we’d caution that there is a potential for overestimation, which can leave the resulting data vulnerable.
Researchers must understand the clinical and commercial contexts in order to fine-tune relevant and reliable market-based questions. One of our next inquiries will explore how a product’s context affects answers to different questions (e.g., are hot areas like oncology more prone to exuberance than more stable categories like hypertension?).
Looking ahead, we’re keeping an eye on the methodology and strongly recommend that pharmaceutical researchers continue to evaluate both the science and the “engineering” of this interesting technique. In the right hands, predictive markets can help companies make better and more timely decisions, and fulfill the ever-present demand that forecasts “get it right.”