FDA vs. the Individual
Who should make decisions about drug safety—FDA or patients and doctors? In this excerpt from his important new book Overdose, the renowned (and ever controversial) legal scholar Richard A. Epstein argues that the current system overvalues risk, ignores individual differences, and needlessly deprives patients of valuable treatments.
It is well known that the FDA has authority to regulate not only for purity, but also for safety and effectiveness. On the first of these questions no one challenges the role of the FDA, either inside the industry or beyond it. But the issues about safety and effectiveness raise serious questions of institutional design, which are not well handled under the current regulatory regime. The key question is this: Where should decisions about drug safety and efficacy be made, upstream by the FDA or downstream by the individual drug user, aided by professional assistance?
If the FDA decides to let a drug onto the market, no one is obliged to use it. Any mistake to permit the sale of a drug is therefore subject to downstream correction by individual users. But a decision to keep the drug off the market is impervious to downstream correction by individual users. In the alternative, the FDA could issue warnings, perhaps very severe warnings, as to the risks associated with product use.
The simple point is that despite their apparent affinity, warnings and bans start from different places within the legal firmament. The critical reason for distinguishing between them is that in the population at large, individuals will have widely varying perceptions of risk and widely varying willingness to undertake it. In other words, some who have received the warnings may nonetheless decide to run the risk, while others may not.
Of course, the warnings could well influence how that decision is made, but even the most imposing of warnings can be disregarded, especially by people who have already used a drug which they found helpful for their own condition.
If it is true that on balance the use of a drug does more good than harm, then a warning is socially destructive if it reduces use below the optimal level. The market information can be as much distorted by overgloomy predictions that lead to excessive caution as it is by exuberant industry-wide promotion that leads to excessive use. Both kinds of errors have to be taken into account.
Moreover, it is absolutely critical to understand that the FDA does not have any monopoly over warnings. Individual physicians and patients can consult other references that deal with drug interactions, read Web sites devoted to particular diseases or conditions, or throw darts at a target to make their decisions. Let the FDA warn away, and it will still be subject to competitive pressures from other individuals and groups that have their own testimonies and judgments to offer.
The emergence of any such voluntary market in warnings is yet another reason to deny the FDA any comprehensive power to ban, or even to issue black-box warnings that raise the costs of product distribution. The sources of information available—the FDA, drug companies, Public Citizen, and physicians' groups—should in aggregate improve the overall level of decision making.
Owing to the possibility of imperfect competition in the warning markets, the hard question is this: Why believe that the FDA is so reliable in its judgments that it should be allowed to make decisions that no individual can reverse?
TWO KINDS OF ERROR
It is critical to begin with the standard distinction between Type I and Type II error, which is rightly stressed in virtually all works critical of the FDA chokehold on drug release. Type I error arises when a drug that should be kept off the market is allowed onto the market, where it causes visible harm to its users. Type II error arises when a valuable drug is kept off the market, thereby making it impossible for sick individuals to benefit from its use.
As a matter of social welfare, the right approach is to balance both types of errors so as to minimize the total number of lives lost or seriously damaged. In making this decision, the mechanism of causation—whether harm is inflicted by approval or benefit denied by exclusion—should be regarded as utterly immaterial to the ban decision, for that is how rational patients would regard it.
Individuals seeking to maximize their expected utility will be willing (in the simplest case) to take a 90 percent risk of death from a given course of action if they believe (ignoring any increment in treatment cost) that the risk of death without that action is 91 percent. That is the decision that they would make with last-ditch surgery, when the FDA is nowhere to be found. It is the same decision people should be allowed to make with drug use.
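The expected-utility arithmetic behind that choice can be made explicit in a short sketch. The 90 and 91 percent figures are the hypothetical ones from the text; the function name is mine, introduced only for illustration:

```python
# A hypothetical patient comparing survival odds with and without a
# risky last-ditch treatment, as in the 90% vs. 91% example.

def survival_probability(risk_of_death: float) -> float:
    """Probability of surviving a course of action with the given risk of death."""
    return 1.0 - risk_of_death

# Risk of death is 90% with the treatment, 91% without it.
with_treatment = survival_probability(0.90)     # 10% chance of survival
without_treatment = survival_probability(0.91)  # 9% chance of survival

# A rational expected-utility maximizer takes the treatment whenever it
# raises the chance of survival, however grim the absolute numbers.
take_treatment = with_treatment > without_treatment
print(take_treatment)
```

The point of the sketch is that the decision turns only on the *comparison* of the two probabilities, not on how frightening either number looks in isolation.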
The key question is whether the FDA has incentives to make choices that capture the desire of any individual to maximize the expected utility of alternative courses of treatment (or, as we laymen say, to give us the best chance to get well).
The answer has long been understood to be no. The key lies in the difference between visible and nonvisible harm. When thalidomide leaves children with deformed limbs, harm is easy to see and easy to assign to a given therapeutic agent.
Its virtues, which turn out to be real, are largely ignored until tempers cool, when, rechristened Thalomid, it is reintroduced into the market as an effective treatment for a complication of leprosy.
The FDA catches all sorts of grief for decisions that let drugs on the market, which means that the agency rates Type I errors at a substantial multiple of Type II errors. The upshot is that too many products are kept from the market as the FDA attempts at all costs to avoid causing visible harm, without taking into account the losses that must be absorbed by delaying or removing products from the marketplace.
The calculus of costs and benefits routinely differs significantly across individuals, for any number of sensible reasons. The first and most obvious is that no two individuals are precisely alike in their need for or tolerance of particular lines of treatment.
A simple statistical truth applies to any trait in any population: the variation around the mean is always greater than zero. In ordinary English, this means that not all individuals in a population are 5 feet 7 inches tall simply because the average person is of that height. It also means that subpopulations—men and women, for example—can have both different means and different variances about those means.
What is true of height is true of toleration of risk, of pain, of aspirin, or of any of the untold factors that go into making a medical decision. The question then is how the inescapable feature of heterogeneity in any population fits into the vital question of whether to prefer monopoly upstream control by the FDA relative to downstream individual control.
The next issue is whether these differences can be perceived and acted upon by individuals, either alone or in conjunction with their physicians. Medical questions are never trivial, even for patients who have access to professional help.
The competence of ordinary people to enter into contracts or to make personal decisions is usually not worth a second thought. But weakened capacity often is the critical issue for individuals of advanced age or declining health. That reality is in real tension with the sensible view, taken by courts in medical malpractice cases, that each individual has sole power to determine whether to submit to invasive surgery or any other treatment by his or her physician.
It is often claimed that ordinary individuals are not only unable to make the correct decisions, but are equally unable to find suitable proxies capable of making good decisions for them. Matters are only made worse, the argument continues, because the pressures of modern medicine are such that physicians do not spend the needed time with patients, even when it might improve overall outcomes. Worse still, there is no viable system of medical malpractice or professional discipline to separate weak from able physicians.
My sense is that all these arguments, while containing some truth, are overdrawn, for their dreary tone cannot explain the many success stories in medical treatment over past years. But even if they were all correct on a descriptive level, they would not lead to any change in overall policy.
It is not possible to correct downstream errors by upstream interventions based on how some patients and physicians behave. It would be most unwise policy to let the FDA decide whether to allow the sale of a drug based on the competence of individual patients in the potential user pool. When the agency makes its upstream decision, any ban that it imposes, like the summer rain, falls on the competent and the incompetent alike.
Decisions on treatment choice are so intensely individual that they must be made downstream, not only for drug usage but also for any and all aspects of health care.
Heterogeneity of the overall population emerges as the critical issue. In their recent study on FDA approval policies, Anup Malani and Feifang Hu note that the FDA "employs a simple rule when deciding whether to approve a new drug for use by physicians: the average treatment effect of the new drug must be superior to the average effect of a placebo." They then criticize this model on the powerful ground that it does not lead to decisions that maximize the expected utility of drug usage.
By focusing on average effects, the FDA ignores the variation in individual responses. Even when the FDA or drug companies stratify patients into cohorts, the problem is not eliminated: there can easily be wide variation even within the smaller classes. It could well be that on average a particular drug does not perform as well as a placebo. But so what? That only shows that most people should not take the drug, not that it should be banned.
The FDA has been criticized on this point from the vantage point of those who want tougher conditions for entry. Thus Arnold Relman and Marcia Angell, fierce critics of the drug industry, urge the FDA to ratchet up its policy of looking at averages yet another notch. "Unfortunately, the FDA will approve a me-too drug on the basis of clinical trials comparing it not with an older drug of the same type, but with a placebo or a drug of another type."
Looked at in its most favorable aspect, their proposal follows the flawed FDA methodology, with a heightened baseline, equal to the average performance of an approved drug. The effect is therefore to keep still more drugs off the market and thus to entrench the monopoly position of the initial entrant.
Moreover, the proposal ignores that drugs of equal average efficacy may produce pronounced variations in individual response because of allergy, intolerance, or other factors that make one drug better and another worse in individual cases. Worse, it forces the new entrant to meet a moving target. Its initial task is to match the performance of the first entrant, but that benchmark will vary over time, making it difficult to know what level of performance will justify FDA approval.
The desired reforms on this issue move in the opposite direction from the ill-considered Relman-Angell proposal. The correct procedure treats this variation in individual response as critical. It first asks whether there is a significant fraction of cases in which the drug under review outperforms the placebo.
The answer to that question is likely to be negative if the mean response is well below that of the placebo and the variance in responses across individuals is small. But as the variance in individual response increases, the FDA's procedure is ever more likely to lead to incorrect results. It is even possible for the mean of the placebo to lie above the mean for the drug, even though some substantial fraction of the population is better off with the drug than without it.
An example might help illustrate the point. Suppose that we rate patient response to treatment on a scale of 0 to 100, where the current drug has an average response of 50 but a variation in responses from 25 to 75. Now put a new drug on the market that has an average response of 45, with a variance of 20 to 70. The question is whether the second drug should be allowed on the market, when each relevant parameter is 5 points below that of the original drug.
If all individuals have the identical rank order of response, then, sure, keep it off. Given that assumption, any person who scores X with the current drug will score X minus 5 with the new one. Individual choice could only compound error.
But heterogeneity totally undermines that assumption. Now even though the whole curve has shifted to the left for the new drug, some fraction of individuals will score better with the new one than the old one. Since we don't know who these individuals are, we pay a high price in letting the entire patient population have only one choice instead of two.
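A quick simulation makes the point concrete. The numbers below come from the example in the text (means of 50 and 45, response ranges of 25 to 75 and 20 to 70); the choice to model the responses as uniform distributions, and the assumption that an individual's responses to the two drugs are statistically independent, are mine, added purely for illustration:

```python
# Sketch of the heterogeneity argument: even when one drug's entire
# response distribution sits 5 points below another's, a large minority
# of patients can still do better on the "worse" drug.
import random

random.seed(0)
N = 100_000  # simulated patients

better_on_new = 0
for _ in range(N):
    old = random.uniform(25, 75)  # response to the current drug, mean 50
    new = random.uniform(20, 70)  # response to the new drug, mean 45
    if new > old:
        better_on_new += 1

# Under independence, roughly 40% of simulated patients score higher on
# the new drug despite its lower mean.
print(f"{better_on_new / N:.1%} of simulated patients do better on the new drug")
```

The precise fraction depends on how strongly individual responses to the two drugs are correlated, which is unknown here; the sketch only shows that a lower mean does not imply that no one benefits.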
At this point, the inquiry shifts to a second question: what knowledge is available about individual variations? If all that is known is that there is a variance among individual cases, without any knowledge of where particular individuals lie on the distribution, the harm from any ban is relatively slight. Since individuals do not know where they lie, as a first approximation, they will act as though they are located at the mean. They would thus make the same decision as the FDA.
At this point, the only loss from the ban stems from the attitude toward risk. If the FDA is risk averse, it will ban products with high variances that some gambling individuals would be prepared to take. In most cases, however, the elaborate systems of downstream control are put into place precisely because ordinary patients and their physicians can make an intelligent estimate of the patients' place on the distribution.
Just as physicians can determine without the FDA's assistance which individuals are good candidates for surgery, so they can also determine which are good candidates for any drug regimen. In general, a ban makes sense as a matter of first principle only on highly restrictive assumptions under which patients not only fail to process the available information but also stumble even when they can purchase the best assistance that money can buy. Hence, presumptively, the FDA should not have the power to ban at all, except in cases that deal with impure or adulterated products.
A MATTER OF DEFINITION
The case against the FDA power in this regard is only strengthened when one looks at the ostensible tests that are used to decide whether to put a ban in place. As we have seen, the operative terms in question deal with effectiveness and safety, which are left undefined under the Food and Drug Act. But why should the act leave its key terms undefined when so much turns on them?
Here the answer has more to do with distribution of political power than with linguistic inquiry. Stated as absolutes, the terms "safe" and "effective" suggest that we work in a dichotomous universe, where the difference between safe and unsafe is as categorical as the difference between driving on the correct side and the wrong side of the road, whether in the United States or the United Kingdom. The same arguments could be made for effectiveness.
Yet, of course, this is pure myth, for the moment the FDA or anyone else has to deal with concrete cases, the task is always to compare the risks and benefits of alternative strategies.
In the Vioxx and Celebrex debate, for example, no one thinks that either drug can be described, categorically, as safe or unsafe. The question is whether, all things considered, it is better to take one drug rather than the other when one carries some elevated risk.
Even that determination depends on all sorts of refinements, given that there is never a uniform response to any given drug. Any FDA finding that a drug is safe and effective cannot possibly be read to mean that it has no adverse side effects and that it works a cure in 100 percent of the cases. We are dealing with wonder drugs, but not ones that have supernatural power.
So literalism is not a viable way to interpret the statutory command unless we are prepared to acknowledge at the start that no drug is good enough to make the cut. Any realistic determination, therefore, has to ask the question of how safe and effective any given drug is, not only for one individual but across large populations. In dealing with this question, the FDA cannot take the position that a drug should be banned just because it has adverse effects in a single class of cases. No one would market (even if unbanned) thalidomide for pregnant women in the first trimester.
Thus far the argument has been that bans have limited social utility. But federal law gives the FDA unquestioned power to ban from the marketplace drugs that it judges to be unsafe or ineffective. The sensible question is how these general propositions about decision making can be translated into a regulatory context that treats that power as unquestioned.
The blunt answer is that there is no good way to back off a statutory command that is too stringent for its own good. But by the same token, the language in question could be interpreted by the FDA to mean that it should ban products only when their release is likely to do more social harm than good.
Of course, this standard is not met simply by showing that certain subsets of the population are better off without a certain drug than with it. Rather the standard should mean that so long as some significant fraction of the population can benefit on net from the use of the drug, it should continue to be sold. Warnings—perhaps on occasion stringent black-box warnings—could be added to the package, and doubtless an extensive network of information about the drug's proper use would develop in the field, given the size of the stakes in question.
One approach to avoiding delays on the one hand and misallocation of funding on the other is to break the state monopoly over testing. Bowing to the inevitability of some FDA-like oversight, Henry Miller [of the Hoover Institution] has not examined the substantive standards for review that I have stressed, but has instead taken a different tack. He has proposed that the United States move to a system similar to European regulation of medical devices and drugs.
For the most part, devices are overseen there by "notified bodies," nongovernmental entities sanctioned by government; and the review of the equivalent of new drug applications is performed under contract by academics skilled in the various areas. Miller's proposal would convert the FDA from a certifier of products to a certifier of private-sector entities that would perform much of the day-to-day regulation of clinical trials and perform the initial NDA review.
The arrangement resembles the role of Underwriters Laboratories and its competitors in setting standards and certifying tens of thousands of categories of consumer products. In principle, the decentralization that Miller defends would be an improvement over the current situation in the United States. But the big concern is whether the practice can be transplanted from one legal culture to another.
More specifically, this proposal will work in the United States only if the FDA is limited by law to minimal influence over the entities that it certifies to perform oversight. Within our context, however, the single most powerful explanation of how the FDA works is the bureaucratic imperative to expand turf no matter what the consequences for others.
My own sense, therefore, is that any proposed system of decentralization could work only if the government removed the oversight from the FDA, with its ingrained habits, and transferred it to a new board, not staffed by FDA veterans, that took a very different view of its overall role in the grand scheme of things.
But in light of the FDA's rearguard efforts to maintain its own power against other initiatives, and the knee-jerk reaction in Congress toward imposing stultifying drug regulation, the betting is that the future holds only more of the same. It is amazing how much harm can be done when the elimination of patient choice is regarded as proof of a diligent system of consumer protection.
This excerpt is taken from Overdose: How Excessive Government Regulation Stifles Pharmaceutical Innovation, published in October by Yale University Press. Used by permission. Copyright © 2006 by Richard A. Epstein. All rights reserved.
Richard A. Epstein is the James Parker Hall Distinguished Service Professor of Law and the director of the Law and Economics Program at the University of Chicago, where he has taught since 1972. He is well known for his libertarian approach to law, and for the way he brings economic analysis to discussions of public policy. His recent books include Forbidden Grounds: The Case Against Employment Discrimination Laws (Harvard, 1992); Takings: Private Property and the Power of Eminent Domain (Harvard, 1985); and Modern Products Liability Law (Greenwood Press, 1980).