From this comment (thanks, Elizabeth Molin) I learned of a British book called Testing Treatments (pdf), whose second edition has just come out. Its goal is to make readers more sophisticated consumers of medical research, to help them distinguish “good” science from “bad” science. Ben Goldacre, the Bad Science columnist, fulsomely praises it (“I genuinely, truly, cannot recommend this awesome book highly enough for its clarity, depth, and humanity”). He wrote a foreword. The main text is by Imogen Evans (medical journalist), Hazel Thornton (writer), Iain Chalmers (medical researcher), and Paul Glasziou (medical researcher, editor of the Journal of Evidence-Based Medicine).
To me, as I’ve said, medical research is almost entirely bad. Almost all medical researchers accept two remarkable rules: (a) first, let them get sick and (b) no cheap remedies. These rules severely limit what is studied. In terms of useful progress, the price of these limits has been enormous: near-total enfeeblement. For many years the Nobel Prize in Medicine has documented the continuing failure of medical researchers all over the world to make significant progress on all major health problems, including depression, heart disease, obesity, cancer, diabetes, stroke, and so on. It is consistent with their level of understanding that some people associated with medicine would write a book about how to do something (good science) the whole field manifestly can’t do. Testing Treatments isn’t just a fat person writing a book about how to lose weight; it’s the author failing to notice he’s fat.
In case the lesson of the Nobel Prizes isn’t clear, here are some questions for the authors:
1. Why no chapter on prevention research? Failing to discuss prevention at any length, when prevention should be at least half of health care, is like writing a book using only half the letters of the alphabet. The authors appear unaware they have done so.
2. Why are practically all common medical treatments expensive?
3. Why should some data be ignored (“clear rules are followed, describing where to look for evidence, what evidence can be included”)? The “systematic reviews” that Goldacre praises here (p. 12) may ignore 95% of available data.
4. The book says: “Patients with life-threatening conditions can be desperate to try anything, including untested ‘treatments’. But it is far better for them to consider enrolling in a suitable clinical trial in which a new treatment is being compared with the current best treatment.” Really? Perhaps an ancient treatment (to the authors, untested) would be better. Why are there never clinical trials that compare current treatments (e.g., drugs) to ancient treatments? The ancient treatments, unlike the current ones, have passed the test of time. (The authors appear unaware of this test.) Why is the comparison always one relatively new treatment versus another even newer treatment?
5. Why does all the research you discuss center on reducing symptoms rather than discovering underlying causes? Isn’t the latter vastly more helpful than the former?
6. In a discussion of how to treat arthritis (pp. 170-172), why no mention of omega-3? Many people (with good reason, including this) consider omega-3 anti-inflammatory. Isn’t inflammation a major source of disease?
7. Why is there nothing about how to make your immune system work better? Why is this topic absent from the examples? The immune system is mentioned only once (“Bacterial infections, such as pneumonia, which are associated with the children’s weakened immune system, are a common cause of death [in children with AIDS]”).
8. Care to defend what you say about “ghostwriting” (where med school professors are the stated authors of papers they didn’t write)? You say ghostwriting is when “a professional writer writes text that is officially credited to someone else” (p. 124). Officially credited? Please explain. You also say “ghostwritten material appears in academic publications too – and with potentially worrying consequences” (p. 124). Potentially worrying consequences? You’re not sure?
9. Have you ever discovered a useful treatment? No such discoveries are described in “About the Authors”, nor does the main text contain examples. If not, why do you think you know how? If you’re just repeating what others have said, why do you think your teachers are capable of useful discovery? The authors dedicate the book to someone “who encouraged us repeatedly to challenge authority.” Did you ever ask your teachers for evidence that evidence-based medicine is an improvement?
The sad irony of Testing Treatments is that it glorifies evidence-based medicine. According to that line of thinking, doctors should ask for evidence of effectiveness. They should not simply prescribe the conventional treatment. In a meta sense, the authors of Testing Treatments have made exactly the mistake that evidence-based medicine was supposed to fix: failure to look at evidence. They have failed to see abundant evidence (e.g., the Nobel Prizes) that, better or not, evidence-based medicine is of little use.
Above all, the authors of Testing Treatments and the architects of evidence-based medicine have failed to ask: How do new ideas begin? How can we encourage them? Healthy science is more than hypothesis testing; it includes hypothesis generation — and therefore includes methods for doing so. What are those methods? By denigrating and ignoring and telling others to ignore what they call “low-quality evidence” (e.g., case studies), the architects of evidence-based medicine have stifled the growth of new ideas. Ordinary doctors cannot do double-blind clinical trials. Yet they can gather data. They can write case reports. They can do n=1 experiments. They can do n=8 experiments (“case series”). There are millions of ordinary doctors, some very smart and creative (e.g., Jack Kruse). They are potentially a great source of new ideas about how to improve health. By denigrating what ordinary doctors can do (the evidence they can collect) — not to mention what the rest of us can do — and by failing to understand innovation, the architects of evidence-based medicine have made a bad situation (the two rules I mentioned earlier) even worse. They have further reduced the ability of the whole field to innovate, to find practical solutions to common problems.
Evidence-based medicine is religion-like in its emphasis on hierarchy (grades of evidence) and rule-following. In the design of religions, these features made sense (to the designers). You want unquestioning obedience (followers must not question leaders) and you want the focus to be on procedure (rules and rituals) rather than on concrete results. Like many religions, evidence-based medicine draws lines (on this side “good”, on that side “bad”) where no lines actually exist. Such line-drawing helps religious leaders because it allows their followers to feel superior to someone (to people outside their religion). When it comes to science, however, these features make things worse. Good ideas can come from anybody, high or low in the hierarchy, on either side of any line. And every scientist comes to realize, if they didn’t already know, that you can’t do good science simply by following rules. It is harder than that. You have to pay close attention to what happens and be flexible. Evidence-based medicine is the opposite of flexible. “There is considerable intellectual tyranny in the name of science,” said Richard Feynman.
Testing Treatments has plenty of stories. Here I agree with the authors — good stories. It’s the rest of the book that shows their misunderstanding. I would replace the book’s many pages of advice and sermonizing with a few simple words: Ask your doctor for the evidence behind his or her treatment recommendation. He or she may not want to tell you. Insist. Don’t settle for vague banalities (“It’s good to catch these things early”). Don’t worry about being “difficult”. You won’t find this advice anywhere in Testing Treatments. If I wanted to help patients, I would find out what happens when this advice is followed.
More: Two of the authors respond in the comments. And I comment on their response.