How Accurate is Epidemiology? (part 2)

Because Gary Taubes is probably the country’s best health journalist, his article in today’s NY Times Magazine (“Do We Really Know What Makes Us Healthy?”) about the perils of epidemiology especially interested me. It’s the best article on the subject I’ve read. He does a good job explaining what’s called the healthy-user bias — people who take Medicine X tend to make other healthy choices as well. Does wine reduce heart attacks? Well, probably — but people who drink more wine also eat more fruits and vegetables.
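To make the healthy-user bias concrete, here is a minimal simulation (my own toy example with made-up probabilities, not anything from Taubes’s article). In this toy world wine has no causal effect on health at all, yet wine drinkers still look healthier, because the health-conscious people who drink wine are also the people who eat vegetables:

    import random

    random.seed(0)

    def simulate(n=100_000):
        # Healthy-user bias in miniature: wine has NO effect on health here.
        # Health-conscious people are simply more likely to drink wine AND
        # to eat vegetables, and only vegetables affect health.
        rows = []
        for _ in range(n):
            conscious = random.random() < 0.5
            wine = random.random() < (0.7 if conscious else 0.3)
            veg = random.random() < (0.8 if conscious else 0.2)
            healthy = random.random() < (0.6 if veg else 0.3)  # wine plays no role
            rows.append((wine, healthy))
        return rows

    rows = simulate()
    for label, flag in (("wine drinkers", True), ("non-drinkers", False)):
        group = [healthy for wine, healthy in rows if wine == flag]
        print(f"{label}: {sum(group) / len(group):.1%} healthy")
    # Roughly 48.6% of wine drinkers vs. 41.4% of non-drinkers come out
    # healthy, even though wine does nothing. A naive reading of the
    # correlation would credit the wine.

A randomized trial breaks this confound by assigning wine independently of everything else, which is exactly what an observational correlation can never do on its own.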

The article falls short in two big ways. Taubes does a terrible job presenting the case for epidemiology. He mentions the discovery that smoking causes lung cancer but then disparages it by quoting someone calling it “turkey shoot” epidemiology. Actually, that discovery did more for public health than any clinical trial or laboratory experiment I can think of. Taubes fails to mention the discovery that too little folate in a pregnant woman’s diet causes neural-tube and other birth defects. As the dean of a school of public health put it in a talk, that one discovery justified all the money ever spent on schools of public health (where epidemiology is taught). Taubes also fails to mention that some sorts of epidemiology are much less error-prone than the studies he discusses. For example, a county-by-county study of cancer rates in the United States showed a big change across a geological fault line. People on one side of the line were eating more selenium than people on the other side. Experiments have left no doubt that too little selenium in your diet causes cancer.

Even worse, Taubes shows no understanding of the big picture. Above all, epidemiology is a way to generate new ideas. Clinical trials are a way to test new ideas. To complain that epidemiology has led to many ideas that turned out to be wrong — or to write a long article about it — is like complaining that you can’t take a bike on the highway. That’s not what bikes are for. If only 10% of the ideas generated by epidemiology turn out to be correct, well, 10% is more than zero. Taubes should have asked everyone he interviewed, “Is there a better way to generate new ideas?” Judging from his article, he asked no one.

Now excuse me while I take a selenium pill . . .

6 thoughts on “How Accurate is Epidemiology? (part 2)”

  1. If only 10% of the ideas generated by epidemiology turn out to be correct, well, 10% is more than zero.
    __________

    Taubes’ points, and I think they’re good ones, are that the hypotheses yielded by epidemiology are confused with facts, and that these questionable hypotheses are immediately implemented as social policy due to a perfect storm of:

    1. researchers trying to build names and careers

    2. climbing reporters, bloggers and a sensationalist press trying to make noise, money and fill space. (USA Today will not be running any articles headlined “Slight correlation of questionable causality found in tiny subset.”)

    3. opportunistic big Pharma & other entrepreneurs large and small who see trends to milk, and

    4. politicians eager to prove they’re “protecting” Americans from the latest evil.

    5. an uneducated population which thinks correlation is causality.

  2. The facts (observations) collected by epidemiologists suggest hypotheses, and sometimes those hypotheses are wrong. You may think it is awful that those hypotheses are reported to the public; I don’t. Let’s say that an epidemiological study finds a correlation between Behavior X and better health. USA Today publishes this. As a result, many people start doing Behavior X. I don’t see the problem. Sure, they could be wasting their time. But maybe not. Everything has risks; everything is uncertain. The epidemiological evidence does raise — or should raise — one’s belief that Behavior X causes better health. A little knowledge — a little push in the direction of certainty — is better than nothing.

    I’m more worried about poorly educated science journalists who are overly critical and poorly educated scientists who are dismissive (e.g., they fail to grasp that correlation raises the plausibility of causation; I have blogged about this) than about “an uneducated population.” It’s the journalists and scientists who have the power.

    I think you’re right that scientists sometimes overstate their case. But I don’t see a lot of that. I see much more unwise dismissiveness.
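    To put a toy number on that “little push” (the probabilities here are invented for illustration, not taken from any study): suppose you start out thinking there is a 10% chance Behavior X helps, and suppose a correlation is three times as likely to be found if X really helps than if it doesn’t. Bayes’ rule raises that 10% to 25%:

        def posterior(prior, p_corr_if_causal, p_corr_if_not):
            # Bayes' rule: updated belief that Behavior X helps, given that
            # a study found a correlation between X and better health.
            p_corr = prior * p_corr_if_causal + (1 - prior) * p_corr_if_not
            return prior * p_corr_if_causal / p_corr

        print(posterior(0.10, 0.60, 0.20))  # 0.25: belief rises from 10% to 25%

    Far from certainty, but a real push in its direction.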

  3. Well, much of Taubes’ article is about the kinds of problems that come from overstating those cases… the erroneous conclusion that HRT helps, when it actually kills women… the downside of folate supplementation… the panicked switch to trans fats after CSPI’s alarums over palm oil in popcorn… and what Taubes explores in his new book, the real result of the bad science that is still believed to support the explosion in carb consumption of the last thirty years.

    I wouldn’t argue for censorship, no, but I would support intelligent use of the data, which ain’t happening.

    For me, it’s hard to argue with the point that the necessary double-blind trials aren’t being done. The conversion of hypothesis into fact means the “trials” are run on the public at large.

  4. Epidemiology is accurate enough. The interpretation of data leaves much to be desired.

    From a clinician’s, teacher’s, and manager’s perspective, I wish everyone would please, please, please realize that every piece of information has its limitations, and that conclusions drawn by inference cannot logically be assumed to represent complete and unalterable truth.

    Ya, take a pill and learn to live with uncertainty.
