Testing Treatments: Nine Questions For the Authors

From this comment (thanks, Elizabeth Molin) I learned of a British book called Testing Treatments (pdf), whose second edition has just come out. Its goal is to make readers more sophisticated consumers of medical research. To help them distinguish “good” science from “bad” science. Ben Goldacre, the Bad Science columnist, fulsomely praises it (“I genuinely, truly, cannot recommend this awesome book highly enough for its clarity, depth, and humanity”). He wrote a foreword. The main text is by Imogen Evans (medical journalist), Hazel Thornton (writer), Iain Chalmers (medical researcher), and Paul Glasziou (medical researcher, editor of Journal of Evidence-Based Medicine).

To me, as I’ve said, medical research is almost entirely bad. Almost all medical researchers accept two remarkable rules: (a) first, let them get sick and (b) no cheap remedies. These rules severely limit what is studied. In terms of useful progress, the price of these limits has been enormous: near-total enfeeblement. For many years the Nobel Prize in Medicine has documented the continuing failure of medical researchers all over the world to make significant progress on all major health problems, including depression, heart disease, obesity, cancer, diabetes, stroke, and so on. It is consistent with their level of understanding that some people associated with medicine would write a book about how to do something (good science) the whole field manifestly can’t do. Testing Treatments isn’t just a fat person writing a book about how to lose weight; it’s the author failing to notice he’s fat.

In case the lesson of the Nobel Prizes isn’t clear, here are some questions for the authors:

1. Why no chapter on prevention research? To fail to discuss prevention at any length, when prevention should be at least half of health care, is like writing a book using only half the letters of the alphabet. The authors appear unaware they have done so.

2. Why are practically all common medical treatments expensive?

3. Why should some data be ignored (“clear rules are followed, describing where to look for evidence, what evidence can be included”)? The “systematic reviews” that Goldacre praises here (p. 12) may ignore 95% of available data.

4. The book says: “Patients with life-threatening conditions can be desperate to try anything, including untested ‘treatments’. But it is far better for them to consider enrolling in a suitable clinical trial in which a new treatment is being compared with the current best treatment.” Really? Perhaps an ancient treatment (to the authors, untested) would be better. Why are there never clinical trials that compare current treatments (e.g., drugs) to ancient treatments? The ancient treatments, unlike the current ones, have passed the test of time. (The authors appear unaware of this test.) Why is the comparison always one relatively new treatment versus another even newer treatment?

5. Why does all the research you discuss center on reducing symptoms rather than discovering underlying causes? Isn’t the latter vastly more helpful than the former?

6. In a discussion of how to treat arthritis (pp. 170-172), why no mention of omega-3? Many people (with good reason, including this) consider omega-3 anti-inflammatory. Isn’t inflammation a major source of disease?

7. Why is there nothing about how to make your immune system work better? Why is this topic absent from the examples? The immune system is mentioned only once (“Bacterial infections, such as pneumonia, which are associated with the children’s weakened immune system, are a common cause of death [in children with AIDS]”).

8. Care to defend what you say about “ghostwriting” (where med school professors are the stated authors of papers they didn’t write)? You say ghostwriting is when “a professional writer writes text that is officially credited to someone else” (p. 124). Officially credited? Please explain. You also say “ghostwritten material appears in academic publications too – and with potentially worrying consequences” (p. 124). Potentially worrying consequences? You’re not sure?

9. Have you ever discovered a useful treatment? No such discoveries are described in “About the Authors,” nor does the main text contain examples. If not, why do you think you know how? If you’re just repeating what others have said, why do you think your teachers are capable of useful discovery? The authors dedicate the book to someone “who encouraged us repeatedly to challenge authority.” Did you ever ask your teachers for evidence that evidence-based medicine is an improvement?

The sad irony of Testing Treatments is that it glorifies evidence-based medicine. According to that line of thinking, doctors should ask for evidence of effectiveness. They should not simply prescribe the conventional treatment. In a meta sense, the authors of Testing Treatments have made exactly the mistake that evidence-based medicine was supposed to fix: failure to look at evidence. They have failed to see abundant evidence (e.g., the Nobel Prizes) that, better or not, evidence-based medicine is of little use.

Above all, the authors of Testing Treatments and the architects of evidence-based medicine have failed to ask: How do new ideas begin? How can we encourage them? Healthy science is more than hypothesis testing; it includes hypothesis generation — and therefore includes methods for doing so. What are those methods? By denigrating and ignoring and telling others to ignore what they call “low-quality evidence” (e.g., case studies), the architects of evidence-based medicine have stifled the growth of new ideas. Ordinary doctors cannot do double-blind clinical trials. Yet they can gather data. They can write case reports. They can do n=1 experiments. They can do n=8 experiments (“case series”). There are millions of ordinary doctors, some very smart and creative (e.g., Jack Kruse). They are potentially a great source of new ideas about how to improve health. By denigrating what ordinary doctors can do (the evidence they can collect) — not to mention what the rest of us can do — and by failing to understand innovation, the architects of evidence-based medicine have made a bad situation (the two rules I mentioned earlier) even worse. They have further reduced the ability of the whole field to innovate, to find practical solutions to common problems.

Evidence-based medicine is religion-like in its emphasis on hierarchy (grades of evidence) and rule-following. In the design of religions, these features made sense (to the designers). You want unquestioning obedience (followers must not question leaders) and you want the focus to be on procedure (rules and rituals) rather than on concrete results. Like many religions, evidence-based medicine draws lines (on this side “good”, on that side “bad”) where no lines actually exist. Such line-drawing helps religious leaders because it allows their followers to feel superior to someone (to people outside their religion). When it comes to science, however, these features make things worse. Good ideas can come from anybody, high or low in the hierarchy, on either side of any line. And every scientist comes to realize, if they didn’t already know, that you can’t do good science simply by following rules. It is harder than that. You have to pay close attention to what happens and be flexible. Evidence-based medicine is the opposite of flexible. “There is considerable intellectual tyranny in the name of science,” said Richard Feynman.

Testing Treatments has plenty of stories. Here I agree with the authors — good stories. It’s the rest of the book that shows their misunderstanding. I would replace the book’s many pages of advice and sermonizing with a few simple words: Ask your doctor for the evidence behind the treatment he or she recommends. He or she may not want to tell you. Insist. Don’t settle for vague banalities (“It’s good to catch these things early”). Don’t worry about being “difficult”. You won’t find this advice anywhere in Testing Treatments. If I wanted to help patients, I would find out what happens when it is followed.

More: Two of the authors respond in the comments. And I comment on their response.

14 thoughts on “Testing Treatments: Nine Questions For the Authors”

  1. Evidence-based medicine has a good name. Who doesn’t want their medical treatments to be based on evidence? In practice, however, it means medicine deemed by a certain group of experts to be supported by evidence. It’s an attempt to create a knowledge monopoly, in the same way that the IPCC attempts to act as a monopoly on climate change knowledge. All monopolies suppress innovation and are often captured or created by special interests — government, academia and corporations. I would suggest knowledge monopolies are often even more damaging than production monopolies, because they deeply affect the entire web of knowledge that they are integrated into, with wide-ranging consequences.
  2. Seth, excellent review and commentary on this topic. I would offer the additional thought that among the things neglected or overlooked by conventional medical research (whether “evidence-based” or not) is anything having to do with diet/nutrition. I give you mucho credit for addressing these matters on your blog.
    Along these lines, if more physicians (and our fellow citizens) would but remove their heads from their butts on such issues, I dare say there could be a dramatically positive effect on public health. More focus on nutritional issues would go a long way in advancing prevention for many common diseases and (the way I prefer to look at it) would help people become proactive on their own in avoidance of same. Of course we would also have to change the incentives in a medical care system that is so driven by profit and conflicts of interest.
    The other reality that would have to be overcome within the medical industry, to foster much less costly prevention (or avoidance) of disease, is the current system that allows issues of diet/nutrition to default to government agencies that have been captured by those whom they are supposed to regulate (the USDA comes to mind) or trade groups such as the American Diabetes Association that get their financial support from pharma and packaged food companies. Talk about conflict of interest!
    At the risk of sounding like a pitchman (which I’m not), I would also like to suggest we at least become aware of the work of some doctors (still largely in the minority) who are already practicing (real) preventive medicine. In this case, I’m specifically referring to William Davis, a cardiologist in Milwaukee. His blog is available to anyone with access to the Internet, but he has now also published a book, “Wheat Belly,” that contains actual case studies from his medical practice, as well as end notes citing the supporting scientific sources. Anyone interested in getting a better idea of what it’s about can simply go on Amazon.com and check out the reviews. I might add that the book has been on the NY Times best seller list, within its category, off and on for several weeks, and can probably now be found in many public libraries.
  3. Ironically and tragically, I heard about the solution when I was a kid. I remember it being about traditional Chinese doctors, though I never verified the story.
    Pay a doc when you’re healthy. When you get sick, stop paying until you get better.
    You get what you pay for. If you pay for treatment, you get treatment. If you pay for health, you get health.
  4. Seth,
    Your point is so important, and I think it’s worth doing whatever it takes to call attention to how the entire medical establishment is designed to reward the lowest ROI research.
    How about creating an annual award to draw more news coverage?
    Why don’t you create the Roberts Prize, for each year’s highest-ROI health advance?
  5. Lemniscate, I like your comment very much. I think the term “knowledge monopoly”, which I haven’t heard before, brilliantly describes the problem. I especially like these parts of your comment:
    Evidence-based medicine has a good name. Who doesn’t want their medical treatments to be based on evidence? . . . It’s an attempt to create a knowledge monopoly, . . . All monopolies suppress innovation and are often captured or created by special interests
    You have put the problem very well and I hope that someday people who support evidence-based medicine will address it.
  6. I got the term from Richard Tol’s paper “Regulating Knowledge Monopolies: The Case of the IPCC,” via Judy Curry. I don’t think his solution to regulate knowledge monopolies is that good; the regulator is just as likely to be captured or created by special interests. However, the idea that (effective) knowledge monopolies have bad effects is important. The idea of one super-authoritative technocratic body seems to have some intuitive appeal to many, including “evidence-based medicine” advocates.
  7. Elisabeth, thanks for the link. The post is titled “What should we ask the doctor?” If you ever ask your doctor for evidence, I hope you will write about what happens. I will post it in my blog if you send it to me or link to it if you post it in your blog.
  8. My husband is the one with the (slightly) elevated cholesterol. I hated that our doctor put him on statins, so I did a lot of research (much of it with the help of your blog), and gave him a bunch of links to relevant studies and information to present to her.
    Her response: “I didn’t know that. Thank you. You probably shouldn’t be on statins.” So now he isn’t.
    Needless to say, I love our doctor!
  9. Elisabeth, thanks for explaining that. To say you love your doctor because she changed her mind when told about evidence she should have known (statins are dangerous and expensive) is quite a comment on doctor behavior.
    What if your husband had asked your doctor “what’s the evidence behind your recommendation?” Do you have any idea what would have happened?
  10. Thank you for responding to my questions. That is generous of you. I will discuss your answers at length later but let me make some simple points right now.
    1. “We don’t suggest ignoring data,” you write. Yet you praised “systematic reviews,” which do exactly that: exclude large fractions of relevant research. My opinion of medical research is irrelevant to the question of whether you advocate ignoring data.
    2. “Our book was about testing treatments, not inventing treatments.” Inventing treatments includes testing treatments. The big point of my comments on your book was that what you advocate suppresses innovation. It does so because it denigrates (and, in systematic reviews, ignores) the cheap small-scale studies (i.e., tests) important in the beginning of an idea.
    3. “However a search for trials and systematic reviews of omega-3 for osteoarthritis suggests we have not overlooked convincing evidence.” Apparently (your use of “convincing”) you have decided that some evidence doesn’t count. Yet omega-3 is, compared to other treatment options you include, very cheap, very easy to get and take, and very safe. I suspect those considerations (price, availability, ease of treatment, safety) did not enter into your judgement of the evidence. You seem to have simply looked at how convincing the studies were. (This is what evidence-based medicine preaches over and over.) Perhaps if you considered those factors, you would value the studies you now dismiss more highly. You should take those considerations into account. I think I speak for every person in the world outside medicine and health care when I say that.
    4. One of my questions was this: “Did you ever ask your teachers for evidence that evidence-based medicine is an improvement?” You seem to have ignored it. The research you praise has its place. The research you denigrate also has its place — at least, I think so. That is why I ask: What is the evidence that evidence-based medicine has made things better?
    5. The large point of my questions, to repeat myself, was that the preferences and values shown in your book suppress innovation because they denigrate the work needed in the beginning. You denigrate the cheap obviously-imperfect tests needed to find good ideas worth testing in more expensive ways. You have not addressed this point.
  11. I think the difference in attitude between Seth and the authors can be partially explained by differing emphases on type I and type II errors. “Evidence-based” medicine puts far more emphasis on type I errors at the expense of type II errors. Not caring about type II errors is bad for innovation and drives up expense. There may be lots of safe and cheap potential treatments one could try. If you’re worried about making a type I error then you might not try a treatment that actually works, committing a type II error. Which type of error you should try to avoid depends on the treatment being considered — cost, potential risks, etc. — and trying to fit all assessments of treatment efficacy into one methodology is a mistake.
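    A back-of-the-envelope calculation makes that asymmetry concrete. The sketch below uses purely hypothetical numbers and function names (they come from neither the book nor the comment above); it only illustrates that when a treatment is cheap and safe, the expected cost of a type I error (trying something that turns out to do nothing) can be much smaller than the expected cost of a type II error (never trying something that would have worked).

        # Minimal sketch with made-up numbers: compare trying a cheap, safe
        # treatment on weak evidence against withholding it until stronger
        # evidence exists.

        def expected_cost_of_trying(p_effective, cost_of_treatment, benefit_if_effective):
            """Expected net cost if the patient tries the treatment now."""
            return cost_of_treatment - p_effective * benefit_if_effective

        def expected_cost_of_waiting(p_effective, benefit_if_effective):
            """Expected net cost of skipping the treatment (the forgone benefit)."""
            return p_effective * benefit_if_effective

        # Hypothetical values: a 20% chance the treatment works, it costs $30,
        # and it is worth $1,000 to the patient if it does work.
        p, cost, benefit = 0.20, 30.0, 1000.0

        print(expected_cost_of_trying(p, cost, benefit))   # -170.0: expected gain from trying
        print(expected_cost_of_waiting(p, benefit))        #  200.0: expected loss from waiting

    With numbers like these the calculation tips heavily toward trying, which is the commenter’s point: the evidentiary bar that matters depends on cost and risk, not on study design alone.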
