That's because the Bayesian approach was developed specifically to deal with new data as they come in, and to update the probabilities
under investigation. Instead of determining how likely it is that a drug's apparent efficacy could have arisen by chance (which is
what the frequentist approach does), a Bayesian trial gives a probability that the drug is effective. The usual question
at this point is: Effective compared to what? And the answer is: compared with the probability of the drug being effective
that you calculated before the data from the current trial came in. This comparison of (initial) prior probabilities and (updated) posterior probabilities
is either one of the great strengths of Bayesian statistics or one of its greatest flaws, depending on which statistician
you ask. For a pro-Bayesian viewpoint, consider the designs being explored at the MD Anderson Cancer Center in Houston.
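As a minimal sketch of that prior-to-posterior updating, consider conjugate Beta-Binomial updating of a response rate. The numbers here are invented for illustration, not taken from any actual trial:

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate update: a Beta(alpha, beta) prior on the response rate
    plus binomial data gives a Beta(alpha + successes, beta + failures)
    posterior."""
    return alpha + successes, beta + failures

# Hypothetical numbers: a weakly informative Beta(2, 2) prior
# (prior mean efficacy 0.5), then 14 responders out of 20 patients.
a0, b0 = 2, 2
a1, b1 = beta_binomial_update(a0, b0, successes=14, failures=6)

prior_mean = a0 / (a0 + b0)        # 0.5
posterior_mean = a1 / (a1 + b1)    # 16/24, about 0.667
print(f"prior mean {prior_mean:.3f} -> posterior mean {posterior_mean:.3f}")
```

Comparing those two means is exactly the prior-versus-posterior comparison that divides statisticians.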
A Bayesian alternative to the 3+3 MTD-finding design is the continual reassessment method (CRM). Before the trial begins,
researchers develop a model of toxicity in relation to dose. Patients are initially assigned doses based on this model, but
as the data come in, the probabilities are recalculated, and later doses close in on the true MTD more effectively than under the
3+3 design. As with all Bayesian methods, it is crucial that the model—the prior hypothesis, in Bayesian jargon—be as well
informed as possible. Some conservative hybrid designs start with a short 3+3-style dosing round to constrain the dose/toxicity
curve before switching to the full CRM technique. There are many other CRM variations, which address issues such as selection
of starting dosages, the size of the dose escalation steps, and the number of patients per dose.
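The core CRM loop can be sketched in a few lines. Everything here is illustrative, not any published protocol: the four-level skeleton, the one-parameter power model, the standard-normal prior on its parameter, and the 25% toxicity target are all assumptions.

```python
import math

# Hypothetical skeleton: prior guesses for toxicity at four dose levels.
skeleton = [0.05, 0.15, 0.30, 0.45]
target = 0.25  # desired toxicity probability at the MTD

def next_dose(observations):
    """Pick the next dose level given (dose_index, had_toxicity) pairs.

    One-parameter power model: p_i(a) = skeleton[i] ** exp(a), with a
    standard-normal prior on a, discretized over a grid. Returns the
    dose whose posterior-mean toxicity is closest to the target."""
    grid = [-3 + 6 * k / 200 for k in range(201)]
    post = []
    for a in grid:
        w = math.exp(-a * a / 2)           # N(0, 1) prior, unnormalized
        for i, tox in observations:        # likelihood of the data so far
            p = skeleton[i] ** math.exp(a)
            w *= p if tox else 1 - p
        post.append(w)
    z = sum(post)
    post = [w / z for w in post]
    # Posterior-mean toxicity at each dose level.
    means = [
        sum(pw * skeleton[i] ** math.exp(a) for a, pw in zip(grid, post))
        for i in range(len(skeleton))
    ]
    return min(range(len(skeleton)), key=lambda i: abs(means[i] - target))

# With no data the prior alone drives the choice; two toxicities out of
# three patients at dose level 2 push the recommendation downward.
print(next_dose([]))
print(next_dose([(2, True), (2, True), (2, False)]))
```

Each new patient cohort reruns `next_dose` with the accumulated data, which is the "continual reassessment" in the name.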
Bayesian designs can provide for a transition from merely sequential to continuous monitoring of trial data, but they can
also allow for a wide range of other parameters to be changed. Designs can be developed that vary, on the fly, the number
of patients needed, eligibility for joining the trial, how patients are to be divided between arms of the study, and what
doses of the investigational drug they'll receive.
There are several ways to implement these response-adaptive patient randomizations. One well-known technique is "Random Play-the-Winner,"
one of the "urn" methods—so called because they can be modeled after different ways of pulling variously colored balls from
an urn. Play-the-Winner mathematically weights the treatment arms that have produced the fewest adverse events and/or the
most positive data so that more patients are assigned to them. Both the degree of positive or negative response in a given
group and the number of patients already assigned to it have to be taken into account, so that every arm remains under
consideration while the design still responds to real differences in outcomes. A similar "Drop-the-Loser" rule can
also be used, which means that in practice, entire dosage groups or efficacy arms can be added or dropped as the data develop.
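A two-arm randomized play-the-winner urn can be simulated in a few lines. The response rates, reward rule, and patient count below are hypothetical, chosen only to show the urn tilting toward the better arm:

```python
import random

def rpw_assign(urn, rng):
    """Draw an arm with probability proportional to its ball count."""
    return rng.choices(range(len(urn)), weights=urn)[0]

def rpw_update(urn, arm, success, reward=1):
    """Two-arm randomized play-the-winner rule: a success adds balls
    for the same arm, a failure adds balls for the other arm."""
    urn[arm if success else 1 - arm] += reward

# Hypothetical simulation: arm 1 truly responds in 60% of patients,
# arm 0 in only 30%, so assignments should drift toward arm 1.
rng = random.Random(0)
true_response = [0.3, 0.6]
urn = [1, 1]  # one ball per arm to start
for _ in range(200):
    arm = rpw_assign(urn, rng)
    success = rng.random() < true_response[arm]
    rpw_update(urn, arm, success)

print(urn)  # arm 1 should hold most of the 202 balls
```

Note how both ingredients mentioned above enter naturally: the strength of response sets how fast balls accumulate, while the existing ball counts reflect how many patients each arm has already drawn.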
It is obvious why this sort of adaptive randomization would be useful to a pharmaceutical company. But in the real world,
it may take a long time to see a meaningful response in patients—too long for the data to effectively influence the course
of the trial. A number of the more theoretical treatments of adaptive trial methods seem to have glossed over this difficulty,
but as such techniques have moved into real practice, the situation has improved, with several statistical techniques proposed
to deal with data from partial courses of treatment. Still, there's no doubt that adaptive designs are simplified considerably
if the expected clinical readout is rapid, something that should happen more often as biomarkers take on a larger role in clinical trials.
A Bayesian approach has the potential to make big changes in the way R&D is conducted. Even the basic Phase I/II/III divisions
that the industry works with could easily blur together. Early dose-finding studies could settle on an optimum level, with
more patients being added to that arm of the study while other dosage groups drop away. Any efficacy data that could be collected
to that point wouldn't be lost, but would bolster the statistics of the trial as it shifted to Phase II concerns.