In the excellent online magazine The Edge (www.edge.org), Gerd Gigerenzer, director of the Center for Adaptive Behavior and
Cognition at the Max Planck Institute for Human Development in Berlin, poses a riddle about risk: Imagine that a 40-year-old
woman has her first mammogram and it comes back positive. The incidence of the disease in her age group is 1 percent. The
test is 90 percent accurate, and it has a false-positive rate of 9 percent. What's the probability that the woman has cancer?
Gigerenzer has presented his puzzle to many physicians, including some with years of experience in mammography. Their answers?
About a third think the probability is 90 percent. Another third put it between 50 and 80 percent. The rest say it's between 1
and 10 percent.
And the real answer? Gigerenzer explains it this way: "Think about 100 women. One of them has breast cancer. This was the
1 percent. She likely tests positive; that's the 90 percent. Out of the 99 who don't have breast cancer, another 9 or 10 will
test positive. So we have 10 or 11 who test positive. How many of them actually have cancer? One, or about one out of ten."
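The natural-frequency reasoning above is Bayes' rule in disguise. A minimal sketch of the same arithmetic, using the figures from the riddle (the variable names are mine, not Gigerenzer's):

```python
# Bayes' rule applied to the mammogram riddle.
prevalence = 0.01        # 1 percent of women in this age group have the disease
sensitivity = 0.90       # the test is 90 percent accurate (true-positive rate)
false_positive = 0.09    # 9 percent of healthy women also test positive

true_positives = prevalence * sensitivity              # women with cancer who test positive
false_positives = (1 - prevalence) * false_positive    # healthy women who test positive
p_cancer_given_positive = true_positives / (true_positives + false_positives)

print(f"{p_cancer_given_positive:.1%}")  # about 9 percent, i.e. roughly 1 in 10
```

The counterintuitive result comes from the base rate: because only 1 woman in 100 has the disease, the handful of false positives among the 99 healthy women swamps the single true positive.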
The explanation is elegant, and as the pharma industry comes under increasing pressure to explain risk, we'll need to hear
a lot from people like Gigerenzer, who devote their lives to bridging the gap between numbers and people. But for the industry,
the puzzle and Gigerenzer's argument raise some important issues.
First, the goal that patients should be able to make informed decisions about treatments is a good one. But if Gigerenzer
is right about doctors' ability to handle numbers, we're missing a link in the chain. Labeling and advertising can convey
facts about benefits, side effects, and the like, but if doctors aren't committed to the idea of analyzing risk, it's unlikely
that patients will make good decisions. Who will take on the task of educating innumerate doctors? I don't think pharma wants
the job, but the industry may end up having to take it on anyway: If patients make bad decisions, we all know where the blame
will ultimately fall.
Second, if we want to go beyond mere compliance and truly communicate risk, we'll be getting into unexplored territory. "According
to decision theory, rational decisions are made according to the so-called expected utility calculus," Gigerenzer writes.
"In economics, for instance, the idea is that if you make an important decision (whom to marry or what stock to buy) you
look at all the consequences of each decision, attach a probability to these consequences, attach a value, and sum them up,
choosing the optimal, highest expected value or expected utility." In theory, that's how we all make decisions. But in practice
we don't, says Gigerenzer. To improve our decision making, we need what he calls "frugal heuristics": mental tools for simplifying
the calculation. Such heuristics are an area where cognitive scientists are breaking new ground, and they may prove crucial
for working with patients.
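The expected-utility calculus Gigerenzer describes can be sketched in a few lines. The options, probabilities, and values below are invented purely for illustration:

```python
# Expected-utility calculus: weight each outcome's value by its probability,
# sum per option, and choose the option with the highest expected value.
# All numbers here are hypothetical.
options = {
    "treat": [(0.7, 10), (0.3, -5)],  # (probability, value) pairs per outcome
    "wait":  [(0.4, 8), (0.6, -2)],
}

expected = {name: sum(p * v for p, v in outcomes)
            for name, outcomes in options.items()}
best = max(expected, key=expected.get)

print(expected)  # {'treat': 5.5, 'wait': 2.0}
print(best)      # 'treat'
```

Gigerenzer's point is precisely that real people don't carry out this summation, which is why simplifying heuristics matter.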
Finally, as Gigerenzer points out, there's a big picture. Risk assessment isn't just a regulatory strategy. It's a way for
human beings to cope with the world and its uncertainties. It's more accurate, more realistic, than the sorts of false certainties
that patients-and doctors, teachers, and politicians-tend to rely on today. It's good independent of its value in getting
products to market and fending off legal problems.
As a risk approach becomes more and more a part of pharma's reality, here's a sustaining thought: Maybe this isn't just a
regulatory fad. Maybe it's part of an evolutionary step forward in the intellectual life of the species. What are the odds?
That, of course, is partly in our hands.
Patrick Clinton, Editor-in-Chief firstname.lastname@example.org