Bruce Reider (editor of the AJSM) writes so well and has such clear logical thinking that it is hard to believe he is a surgeon! This is a terrific short editorial from the AJSM on the use of randomized controlled trials (RCTs) in clinical medicine:
More Random Thoughts
Bruce Reider, MD
"The splendid scientific contributions of randomized trials have made them the 'gold standard' of cause-effect research, but when gold is too expensive or too difficult to obtain, we need to have good substitutes."
Alvan Feinstein. Current problems and future challenges in randomized clinical trials. Circulation. 1984;70:767-774.
Last month, I pondered why orthopaedic surgeons seem reluctant to perform randomized clinical trials. My thoughts were triggered by an incidental comment at an academic meeting. At another conference this year, I was cornered during the coffee break by a surgical contemporary, a thoughtful clinical researcher who has contributed dozens of valuable articles to the orthopaedic sports medicine literature. He demanded to know why randomized clinical trials are considered more "scientific" than other clinical research designs. I thought that this was a good question, whose answer is not as straightforward as it might seem.
The most direct response to this question is that the randomized clinical trial is indeed more scientific than other research designs because this methodology seeks to isolate a single parameter and demonstrate how it affects a patient's condition. The randomized clinical trial is intended to eliminate selection bias in the allocation of patients to different treatment groups in prospective clinical studies.6
There is nothing magical about the randomization process. It assumes that random variation will produce groups that are virtually identical in all parameters that might affect the outcome, except for the treatment modalities being compared. Thus, the investigator "plays the odds" that random assignment will produce equivalent study groups. Although careful subject selection and assignment in nonrandomized trials may ensure a balanced distribution of factors known to influence treatment outcome, randomization has the added potential of balancing prognostic factors that are unrecognized or difficult to quantify.
This system is most effective when a large number of subjects are being studied. In smaller trials, as often occur in orthopaedic surgery, it is quite possible that randomization will produce groups that differ in some important parameters, such as gender or activity level. Although specialized randomization techniques can help avoid this problem, it is still important that reports of randomized trials document that the assignment process did indeed produce equal groups.
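As an aside, the small-trial imbalance Reider describes is easy to see in a quick simulation. The sketch below (hypothetical; the 15-percentage-point threshold and covariate prevalence are illustrative choices, not from the editorial) randomizes subjects into two equal arms and estimates how often the arms end up differing substantially on a binary covariate such as gender:

```python
import random

def imbalance_after_randomization(n_subjects, p_covariate=0.5,
                                  trials=10000, seed=1):
    """Estimate how often simple 1:1 randomization leaves the two arms
    differing by more than 15 percentage points on a binary covariate
    (assumed present in half the population)."""
    rng = random.Random(seed)
    big_gaps = 0
    per_arm = n_subjects // 2
    for _ in range(trials):
        # Simulate the covariate independently for each subject in each arm.
        arm_a = sum(rng.random() < p_covariate for _ in range(per_arm))
        arm_b = sum(rng.random() < p_covariate for _ in range(per_arm))
        if abs(arm_a - arm_b) / per_arm > 0.15:
            big_gaps += 1
    return big_gaps / trials

for n in (20, 60, 200, 1000):
    print(n, imbalance_after_randomization(n))
```

With 20 subjects, a large imbalance occurs in roughly half of simulated trials; with 1,000 subjects it essentially never does, which is why randomization "plays the odds" far more safely in large studies.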
When we read that a study used randomization to assign patients to treatment groups, we are likely to be reassured that the process was unbiased. If subjects were distributed by some other means, we may have lingering doubts that the assignment was consciously or inadvertently influenced to favor the outcome in one group or another. When historical controls are taken from an earlier time period, there is a risk that factors other than the experimental variable may have improved in the interim. Even when nonrandomized controls are contemporaneous, differences in patient recruitment or self-selection may have hopelessly biased treatment assignment. If randomization seems to have been feasible and was needlessly avoided, we are justifiably skeptical about other aspects of the study design and implementation.1
The randomized clinical trial came to the forefront following the publication of articles that contrasted the results of such trials with the results of observational studies that used historical control groups.3,4,8 One such article, which compared 50 randomized controlled trials with 56 observational studies, found that the therapy being investigated was considered effective in only 20% of the randomized trials but nearly 79% of the observational studies.8 The conclusion was that observational studies with historical controls tend to be inherently biased in favor of new treatments.
Some more recent publications have challenged the primacy of the randomized controlled trial.2,5 When these researchers looked at modern observational studies that used control groups but did not involve randomization, they found that the conclusions almost always agreed with those obtained by randomized controlled trials on the same topics. In an intriguing analysis, Horwitz and colleagues7 showed that restricting a prospective observational cohort according to the eligibility criteria necessary for a randomized study produced results that were similar to a "gold standard" randomized clinical trial, whereas the use of less stringent "expanded" eligibility criteria resulted in a substantially larger, presumably biased, treatment effect. Thus, nonrandomized prospective studies that are planned, conducted, and analyzed with the same methodological thoroughness seem able to avoid most potential biases and achieve results similar to randomized trials.1 Ideally, this should include not only strict selection criteria and equitable assignment methods but also evaluation by observers who are "blinded" to the treatment given.9
The use of randomization does not guarantee a valuable study. Randomized trials that are poorly conceived, are badly executed, or ask a question of marginal clinical importance will not be worthy of our attention. The randomization process may provide subject groups that are initially equivalent, but a study may then be compromised by inequalities in adjunctive treatments or in the evaluation process.
The structure of randomized trials bears its own inherent limitations.5 The strict nature of their inclusion and exclusion criteria leads to homogeneous groups that may not reflect the entire spectrum of patients seen in a clinical practice. Because subjects must consent to participate in such studies, certain classes of patients may be systematically excluded. In addition, the regimentation of the study methodology may produce a "best case scenario" in treatment rigor and patient compliance that exceeds real-world experience.
Because ethics dictate that treatments being compared in a randomized trial should appear to have similar chances of success, randomized trials tend to compare treatment alternatives that have relatively small differences in their results.1 It is therefore ironic that the practical size limitations of these studies may restrict their power to detect subtle but possibly clinically important differences in treatment outcome.8 Large multicenter cohorts may be required to study uncommon sequelae, such as complications and side effects, in a meaningful way.
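The power problem Reider raises here can be made concrete with the standard normal-approximation sample-size formula for comparing two proportions (the success rates below are illustrative, not drawn from the editorial):

```python
from math import ceil, sqrt

def n_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    """Approximate subjects needed per arm to detect a difference between
    success proportions p1 and p2 (two-sided alpha = 0.05, 80% power,
    normal approximation)."""
    p_bar = (p1 + p2) / 2
    num = (alpha_z * sqrt(2 * p_bar * (1 - p_bar))
           + power_z * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# A large difference is cheap to detect; a subtle one is not.
print(n_per_arm(0.90, 0.70))  # e.g., 90% vs 70% success rates
print(n_per_arm(0.90, 0.85))  # 90% vs 85% success rates
```

Shrinking the detectable difference from 20 to 5 percentage points raises the required enrollment roughly tenfold per arm, which is exactly why modestly sized surgical trials struggle to resolve subtle but clinically meaningful differences.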
It is difficult and expensive to follow randomized populations for long periods of time. This limits our ability to use this study method to detect outcome variables that develop slowly, such as osteoarthritis following shoulder stabilization. As time goes on, randomized patients may be lost to follow-up or "contaminate" themselves by seeking other treatments or diverging from behavioral guidelines.
Randomized clinical trials remain the gold standard of clinical research. From a practical standpoint, however, there are just too many treatment options and prognostic factors to expect that they all will be isolated and studied individually in randomized trials.6 It will also be impractical or even unethical to use randomized controlled trials to answer some questions, such as the effect of lifestyle factors on health or the incidence of uncommon side effects of a given treatment. When other designs are used, however, it is important that they incorporate as many of the positive features associated with a randomized trial as possible.
1. Abel U, Koch A. The role of randomization in clinical studies: myths and beliefs. J Clin Epidemiol. 1999;52:487-497.
2. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342:1878-1886.
3. Byar DP, Simon RM, Friedewald WT, et al. Randomized clinical trials: perspectives on some recent ideas. N Engl J Med. 1976;295:74-80.
4. Chalmers TC, Celano P, Sacks HS, Smith H Jr. Bias in treatment assignment in controlled clinical trials. N Engl J Med. 1983;309:1358-1361.
5. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342:1887-1892.
6. Feinstein AR. Current problems and future challenges in randomized clinical trials. Circulation. 1984;70:767-774.
7. Horwitz RI, Viscoli CM, Clemens JD, Sadock RT. Developing improved observational methods for evaluating therapeutic effectiveness. Am J Med. 1990;89:630-638.
8. Sacks H, Chalmers TC, Smith H Jr. Randomized versus historical controls for clinical trials. Am J Med. 1982;72:233-240.
9. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273:408-412.