A changing paradigm for medical research: the evolution of the clinical trial

Kayvon Modjarrad
Dale and Betty Bumpers Vaccine Research Center, National Institute of Allergy and Infectious Diseases, Bethesda, Maryland, United States (Fall 2013)

The history of science follows a convoluted path of imperceptible intellectual drifts and sudden philosophical shifts. Scientific milestones are, therefore, the result of gradually accumulating thought. This is as true for advances in the methods of scientific inquiry as it is for the content of scientific discovery. In tracing the emergence of these methods, the clinical trial may be an ideal paradigm for studying the process of scientific evolution. At the current pace of biomedical research, the seeds of scientific thought germinate and grow into practical therapies relatively quickly. The history of clinical science recapitulates this process of therapeutic development, but over a longer span and against greater resistance. The clinical trial is now the dominant force in clinical research, yet it reached this position only through a slow ascent over several millennia.

The year the randomized clinical trial was born is often cited as 1948.1 That claim, however justified, is somewhat oversimplified. At the close of World War II, antibiotics had emerged as one of the most powerful weapons in a still limited armamentarium against infectious diseases. One antibiotic, streptomycin, was of particular interest for its potential use against society’s historical scourge: tuberculosis. In 1948 the United Kingdom’s Medical Research Council published the results of its trial of the compound in patients with pulmonary tuberculosis. The findings of that study were undeniably important, but the method by which those results were obtained was perhaps even more significant: the study marked a new era of the modern clinical trial. The journey to 1948, however, was a long one, marked by alternating periods of stagnation and success.

Stripped of its modern-day sophistication, the clinical trial is essentially a fair test: a comparison of interventions. When viewed from a minimalistic perspective, some version of the controlled clinical trial has been in existence for thousands of years. One of the first recorded accounts of a comparative test comes from the Book of Daniel of the Old Testament.2 In the first chapter, Daniel refuses to consume the king’s food, instead requesting his men be given a diet of water and vegetables while others feast on royal rations. After ten days he challenges the king to judge the health of both groups based on their appearance. Daniel’s men are judged to be fitter.

Sacred texts are rife with stories of comparative interventions, but often overlooked are references to the central tenet of any clinical trial: randomization. From the partitioning of stalks in the I Ching to the division of stones in the Bible, chance has been used for the purpose of divination in many cultures for thousands of years.3 But many years passed before physicians and scientists appropriated random allotment for the purpose of experimental investigation. Not until the Renaissance did the concept reemerge. In the 14th century the poet Petrarch related the words of an anonymous physician to his contemporary Boccaccio:

I solemnly affirm and believe, if a hundred or a thousand men of the same age, same temperament and habits, together with the same surroundings, were attacked at the same time by the same disease, that if the one half followed the prescriptions of the doctors of the variety of those practicing at the present day, and that the other half took no medicine but relied on Nature’s instincts, I have no doubt as to which half would escape.4

In an expression of doubt about the merits of his day’s medical practice, the physician describes something resembling a placebo-controlled clinical trial. Three hundred years after Petrarch, the Flemish physician Jean Baptiste Van Helmont questioned the effectiveness of treating fevers by bloodletting. He decided to challenge the medical orthodoxy with a small study in which he divided a group of patients into two treatment arms: bloodletting and bed rest. He made the conceptual leap toward an idea approximating randomization by casting lots to decide who would receive each treatment.5 He found that the group that was rested but not exsanguinated fared better. He published his findings and was summarily ignored.

One hundred years after Van Helmont’s proposition, the primordial semblance of the clinical trial reemerged in 1747 in the writings of James Lind. A medical officer for the British Royal Navy, Lind tested the efficacy of various treatments for scurvy.6 In his treatise on the subject, he described his attempt to arrange his treatment groups so that they were as similar as possible in all attributes other than the intervention. In modern terms, he attempted to remove baseline confounders, a concept now integral to any observational or experimental study.

Subsequent to Lind’s treatise, the number of comparative studies increased manifold. The pressures of cultural expansion and industrial development were beginning to exert an influence on the direction of science and medicine, and scientific reasoning and methodology were being forced to meet the pace of societal change. Although small steps in the evolution of the clinical trial were made in the years after Lind, it was not until a century later that experimental design played a prominent role in medical evaluation. In 1865, for example, Claude Bernard published the Introduction to the Study of Experimental Medicine, in which he challenged the medical profession to improve its standards of care by basing them on scientific principles:

Comparative experiments showed, in fact, that treatment of pneumonia by bleeding, which was believed most efficacious, is a mere therapeutic illusion. … To learn we must necessarily reason about what we have observed, compare the facts, and judge them by other facts used as controls.7

Although many scientists helped conceive components of the evolving clinical trial, Johannes Fibiger was arguably the first to synthesize those parts. In 1898 Fibiger, a Danish physician who later won the Nobel Prize for other work, was investigating the effect of serum treatment for diphtheria.8 He assigned treatment groups by alternate allocation; rather than choosing every other patient for the treatment group, however, his scheme allotted treatment according to the day on which the patient was admitted to the hospital. Departing from prior practice, this scheme diminished the influence of the investigator’s discriminatory biases.9 Fibiger’s method of allocation was still subject to bias, but with great prescience he admitted to the subjectivity of the judgment that factored into his selection process:

That this played not a trivial role can hardly be doubted. … In many cases a trustworthy verdict can only be reached when a large number of randomly selected patients are treated with the new remedy and, at the same time, an equally large number of randomly selected patients are treated as usual.8

At the dawn of the 20th century, medicine was undergoing a radical change. Anecdotal evidence was giving way to scientific principles as the new foundation for medical practice. Consequently, medical research gravitated toward experimental designs that were based, more and more, on the rapidly evolving discipline of statistics. In 1923, R. A. Fisher introduced a statistical theory of randomization within the context of agricultural research.11 A decade and a half later, Austin Bradford Hill published a series of articles in The Lancet that extended Fisher’s ideas to the broader subject of validity. He argued that randomization could protect a study against bias and achieve similarity between comparison groups. Although Hill envisioned the use of random allotment in medical research, he did not advocate for it immediately, believing that physicians were not yet prepared to make the intellectual shift to a statistical basis for medical decision-making.12,13 In 1947, however, he saw an opportunity to test his ideas and present them to the medical community.

When streptomycin was isolated in Selman Waksman’s laboratory at Rutgers University in 1943, no one could foresee its enormous implications. By 1946, US production and distribution of the compound were at full capacity, but almost all of it was reserved for domestic or military use; the rest of the world would have to wait. That year, though, the British government was able to procure a small amount of the drug.11,14 That limited quantity is perhaps one of the most important details in the history of clinical trials. Because of the shortage, officials and investigators faced a dilemma: to whom would they ration the limited supply of streptomycin? The investigators soon realized their decision could be made easier through the equalizing power of randomization. Conceived as a methodological improvement, randomization was implemented under the pressure to distribute a scarce drug fairly.

The superiority of randomization over alternate allocation was not immediately apparent to the scientific community. The conventional method could avoid bias if strictly executed; the difference lay in the potential for bias. In the trial of 1948, Hill devised a system of random number assignments that were sealed in envelopes and concealed from the investigators of the study.15,16 Although randomization eventually eclipsed alternation, it was not until the Salk vaccine trial of 1954 that the new method was widely accepted by the scientific community at large.14 Since that time, randomized clinical trials have been adopted in all areas of medicine and have become the gold standard for evaluating novel therapies.

 

References

  1. Doll R. Controlled trials: the 1948 watershed. BMJ 317:1217-1220, 1998.
  2. New Revised Standard Version Bible. Thomas Nelson, Nashville, 1989.
  3. Silverman WA, Chalmers I. Casting and drawing lots. In: Chalmers I, Milne I, Trohler U (eds). Controlled Trials from History.
  4. Lilienfeld AM. Ceteris paribus: the evolution of the clinical trial. Bulletin of the History of Medicine 56:1-18, 1982.
  5. Chalmers I. Control of selection biases: comparing like with like. In: Chalmers I, Milne I, Trohler U (eds). Controlled Trials from History.
  6. Meldrum ML. A brief history of the randomized controlled trial: from oranges and lemons to the gold standard. Hematology/Oncology Clinics of North America 14(4):745-760, 2000.
  7. Cox DCT. Histories of controlled trials. In: Chalmers I, Milne I, Trohler U (eds). Controlled Trials from History.
  8. Fibiger JU. On serum treatment of diphtheria. Hospitalstidende 6(12), 1898 (translated and published in BMJ 317, 1998).
  9. Hrobjartsson A, Gotzsche PC, Gluud C. The controlled clinical trial turns 100 years: Fibiger’s trial of serum treatment of diphtheria. BMJ 317:1243-1245, 1998.
  10. Cassedy JH. Medicine in America: A Short History. The Johns Hopkins University Press, Baltimore, 1991.
  11. Yoshioka A. Use of randomization in the Medical Research Council’s clinical trial of streptomycin in pulmonary tuberculosis in the 1940s. BMJ 317:1220-1223, 1998.
  12. Hill AB. Memories of the British streptomycin trial in tuberculosis: the first randomized clinical trial. Controlled Clinical Trials 11:77-79, 1990.
  13. D’Arcy Hart P. A change in scientific approach: from alternation to randomized allocation in clinical trials in the 1940s. BMJ 319:572-573, 1999.
  14. Marks HM. Notes from the underground: the social organization of therapeutic research. Grand Rounds in Medicine.
  15. Hill AB. The clinical trial. The New England Journal of Medicine 247(4):113-119, 1952.
  16. Randal J. How randomized clinical trials came into their own. Journal of the National Cancer Institute 90(17):1257-1258, 1998.

 


 

KAYVON MODJARRAD, MD, PhD, is a Research Fellow at the National Institutes of Health Vaccine Research Center, where he studies humoral immune responses to respiratory viruses. He completed his clinical training in Internal Medicine and Infectious Diseases at Yale-New Haven Hospital and the Vanderbilt University Medical Center and earned an MD and PhD through the federally funded dual-degree Medical Scientist Training Program at the University of Alabama at Birmingham. He conducted the dissertation research for his PhD in Epidemiology in Lusaka, Zambia, where he studied the interaction of HIV and parasitic co-infections. Prior to his medical and graduate studies he completed an undergraduate degree at Duke University.

 

Highlighted in Frontispiece Fall 2013 – Volume 5, Issue 4
