The Trouble With RCTs

In an email to a friend, I compared the obsession of med school professors with methodological purity (e.g., efficacy must be demonstrated with an RCT, a randomized controlled trial) to religious ritual. More concern with appearances (ritual), I said, is linked to less understanding of substance. My friend replied:

I am actually a believer in this particular religion (The Cult of RCT)! Seriously: I think the medical world is quite right to put a huge premium on RCTs, because RCTs so often prove that things they are doing don’t work. While sometimes the RCT may provide a negative verdict on something that does work, this seems to me an unusual case, and generally avoidable if one considers statistical power, possible subgroup responses, etc., and avoids overgeneralizing the conclusions.

I replied:

Are RCTs better than what prevailed before? Probably. But I would say the same about religion, which has its benefits.

I think the medical world has turned off a large fraction of its brain via insistence on RCTs and failure to understand their weaknesses and the strengths of alternatives. It isn’t just that “RCTs may return a negative verdict on something that works”; it’s also that requiring such very expensive research suppresses innovation — testing things in cheaper ways. Atul Gawande wrote about how obstetricians made a lot of progress by ignoring this requirement:

https://www.newyorker.com/archive/2006/10/09/061009fa_fact

It can be argued that other areas of medicine, which followed the RCT requirement, made less progress during the same period.

Let’s say I told you that the only way you can travel to work is via an armed escort — you would be appalled, even though it’s true you would be safer. An insistence on RCTs is a similar overreaction. Given the lack of innovation in medicine and health care, for which I believe RCTs (or at least the lack of understanding they embody) are partly responsible, it is a very expensive overreaction.

The best way to learn is to do. The best way to learn about health is to do as many experiments as possible. Not slow, expensive RCTs. Not slow, expensive surveys, which don’t involve “doing” to the extent that an experiment does. This is a big reason my self-experiments taught me a lot — because I could do so many of them.
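A quick illustration of my friend’s point about statistical power (a minimal sketch in Python, not part of our exchange; the effect size and sample sizes are invented for illustration): an RCT that is too small will usually miss a real effect, which is one way a very expensive trial can return a false negative verdict on something that works.

from scipy.stats import norm

def approximate_power(effect_size, n_per_group, alpha=0.05):
    # Normal-approximation power of a two-sided, two-sample test of means.
    z_crit = norm.ppf(1 - alpha / 2)            # two-sided critical value
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)     # P(reject H0 | the effect is real)

# A modest but real effect (Cohen's d = 0.3) at several trial sizes:
for n in (20, 50, 200, 500):
    print(f"n per group = {n:4d}  ->  power ~ {approximate_power(0.3, n):.2f}")

With 20 or 50 patients per arm such a trial will usually “fail” even though the treatment works; only the larger, far more expensive trials are likely to detect the effect.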

11 thoughts on “The Trouble With RCTs”

  1. Why not do a randomized experiment to test the value of non-randomized methods? For example, randomly assign patients to docs who either just do what randomized experiments advise, or to docs who use their judgment based on other inputs. See which group does better.

  2. Robin, yes, that would make the point. Just to be clear, the comparison I’m talking about is:

    group A: randomized trials evidence only. All other evidence is ignored.

    group B: all available evidence. Randomized trial evidence plus everything else.

    I’m saying the purists ignore valuable information (in addition to suppressing innovation).

  3. Seth,
    The focus on RCTs is to “know” things for sure. Anecdotes, self-experimentation, and cohort studies are all good for generating hypotheses. But you wouldn’t want to say “x causes y / x prevents y” without an RCT, or at least a lot more evidence than a few people doing self-experimentation.

    You may be right that self-experimentation isn’t done enough, but there also needs to be a second layer of double-checking. We can’t have doctors treating people based on anecdotes. People think all sorts of crazy things about health. Like, their friend’s cancer was cured by copper bracelets, or magnets, or gold, or a million other things. Sometimes a correlation isn’t causation, sometimes it’s a placebo effect, etc.

  4. @Jeff, “We can’t have doctors treating people based on anecdotes.”

    People conclude lots of things (correctly, in a warranted way, and tentatively) based on “anecdotes”. If we called them “case studies” would that make it better?

    Better questions – and more difficult ones – than a blanket statement about anecdotes or case studies: what kind of evidence is there in the case study? How strong is the evidence? What are the problems with it? What can we conclude with warrant based on this evidence? How can we fruitfully test it further? And so on.

    If one can conduct an RCT, then sure, go ahead. How many doctors can just whip off one of these? Instead, they should prescribe based on the evidence base available, which should include careful evaluation of case studies and even anecdotes. To ignore them would be … willful ignorance.

  5. Jeff, RCTs have their place, just as armed escorts have their place. At the moment they have too much place. They loom too large in the thinking of health professionals. It’s not so much that RCTs are bad or worthless, I’m not saying that at all. As you say, it’s often good to do a better study. Sometimes that better study is an RCT. The huge mistake is ignoring non-RCT evidence. Claiming it isn’t “real” or something.

  6. Randomized controlled trial = often self-serving research funded by a big corporation, with the results screened by the corporation, but accepted blindly by many health professionals.

    Reports from patients about adverse effects = too often ignored as “anecdotal.”

    Jim

  7. Yes, big food companies love regulations. Because they can afford their cost and small new companies cannot. The regulations suppress innovation and therefore suppress competition. This is exactly what the emphasis on RCTs does — suppresses competition to Big Pharma.

  8. Big Pharma certainly has a way of “influencing” the results of RCTs. If the science is incompatible with the business plan, well… so much the worse for science.

    See this excellent article about how Bristol-Myers Squibb managed to obtain FDA approval for Serzone (an antidepressant drug):

    https://web.archive.org/web/20060212173429/https://www.washingtonian.com/health/hardtoswallow.html

    The relevant part of the article starts around the middle of the second page.

  9. https://robertpaulwolff.blogspot.com/2010/04/memoirs-volume-two-fifth-installment.html

    Even more fascinating was Bakan’s study of the roots of Behaviorism in American Psychology. He discovered that the men who developed and shaped the Behaviorist school had all come from Protestant families in small mid-Western towns and had then moved to big cities [typically, Chicago], where the culture shock of the extremely heterogeneous population mix drove them to maintain some sort of control over their shattered moral framework by seizing on Behaviorism. Bakan did a careful analysis of the experimental reports published in the leading Behaviorist journals, and also of the papers that were turned down on the grounds that the authors had not done enough experiments to make their results statistically significant. He reanalyzed the data to show that the editors and reviewers routinely overestimated the number of experiments required for statistical significance, in effect treating experiments as a place holder for Protestant good works.

  10. I think everybody can agree that small-scale experimentation is good, as are large-scale RCTs. And that putting emphasis on the rituals of method at the cost of genuine insight is a bad idea. And that it’s all about opportunity cost as usual. So what’s the problem?

    I think it’s that the ritual has been elevated into law, and that stops small-scale experimentation from proceeding at its equilibrium, laissez-faire pace. If I’m right, then what we’re talking about here is not an epistemological question at all, but a political one. Those then turn on quite different considerations from what has been said so far, e.g., the right to consent to experimental treatment, medical ethics, and the like.
