Epidemiology has lots of critics. In this article, for example, it is called “lying on a grand scale.” Every critique I have read has ignored history. Epidemiologists have been right about two major issues:

1. Heavy smoking causes lung cancer.
2. Folate deficiency causes birth defects.

In both cases, the first evidence was epidemiological. Another example is John Snow’s conclusion about the value of clean water. In my experience, epidemiologists often overstate the strength of their evidence (as do most of us), but overstatement is quite different from having nothing worth saying.
Let’s look at an example. Many people think osteoporosis is due to lack of calcium. Bones are made of calcium, right? Yet the epidemiology of hip fractures is clear: the rate of hip fracture has been highest in places where people eat a lot of calcium, such as Sweden, and lowest in places where they eat little, such as Hong Kong. In other words, the epidemiology flatly contradicted the conventional idea. This was apparently ignored by nutrition experts (everyone knows correlation does not equal causation), who advised millions of people, especially women, to take calcium supplements to avoid osteoporosis. Millions followed (and follow) that advice.
Thanks to a recent meta-analysis, we now know that experiments and better data firmly support the earlier epidemiology, which suggested that calcium supplements are dangerous. Here are its main conclusions:
In meta-analyses of placebo controlled trials of calcium or calcium and vitamin D, complete trial-level data were available for 28,072 participants from eight trials of calcium supplements and the WHI CaD participants not taking personal calcium supplements. . . . Calcium or calcium and vitamin D increased the risk of myocardial infarction (relative risk 1.24 (1.07 to 1.45), P = 0.004) and the composite of myocardial infarction or stroke (1.15 (1.03 to 1.27), P = 0.009). . . . A reassessment of the role of calcium supplements in osteoporosis management is warranted.
If the epidemiology had been taken more seriously, many heart attacks might have been avoided.
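The headline number in that quote, a relative risk of 1.24 with a 95% confidence interval of 1.07 to 1.45, comes from pooling results across trials. For readers who want to see the mechanics, here is a minimal sketch of the standard fixed-effect (inverse-variance) way of combining relative risks. The event counts below are hypothetical, invented purely for illustration; they are not the data behind the quoted result, and the published analysis is more elaborate than this.

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling of relative risks.
# The counts are HYPOTHETICAL, for illustration only -- not the trial data
# behind the quoted meta-analysis.
import math

# (events_treatment, n_treatment, events_control, n_control) per invented trial
trials = [
    (35, 1400, 28, 1400),
    (50, 2300, 40, 2250),
    (22,  900, 16,  910),
]

log_rrs, weights = [], []
for et, nt, ec, nc in trials:
    rr = (et / nt) / (ec / nc)                    # relative risk in one trial
    se = math.sqrt(1/et - 1/nt + 1/ec - 1/nc)     # standard error of log(RR)
    log_rrs.append(math.log(rr))
    weights.append(1 / se**2)                     # inverse-variance weight

pooled = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled)
low, high = math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se)
print(f"pooled relative risk = {rr:.2f} (95% CI {low:.2f} to {high:.2f})")
```

Pooling is done on the log scale because relative risks are ratios, and the 95% interval comes from the pooled standard error; a relative risk of 1.24 means roughly 24% more heart attacks in the supplemented groups.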
Is this an “anecdote” — a single example — proving nothing? Here’s how you can check. Randomly select a meta-analysis of epidemiological studies. Thousands have been done. Then ask if the results summarized in the meta-analysis appear random. Better yet, randomly pick two meta-analyses. Suppose the first summarizes 5 studies and the second summarizes 6. If the 11 results were shuffled together, how well could you assign them correctly?
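Here is a toy version of that check, under assumptions I am inventing purely for illustration: the individual study results in each meta-analysis scatter around that meta-analysis’s true effect, with made-up effect sizes and noise. If epidemiological results were just noise, shuffling the 11 results together and assigning each one to the group whose mean it sits nearer would work no better than a coin flip; if the studies track real effects, the grouping should be largely recoverable.

```python
# Toy simulation of the "shuffle two meta-analyses" check. All numbers here
# (true effects, noise level, group sizes) are hypothetical, for illustration only.
import random

def loo_mean(values, i):
    # Mean of a group leaving out the result being classified, so a result
    # cannot pull its own group's mean toward itself.
    return (sum(values) - values[i]) / (len(values) - 1)

def assignment_accuracy(effect_a, effect_b, n_a=5, n_b=6, noise=0.15, reps=5000):
    correct = 0
    for _ in range(reps):
        # Individual study results scatter around each meta-analysis's true effect.
        a = [random.gauss(effect_a, noise) for _ in range(n_a)]
        b = [random.gauss(effect_b, noise) for _ in range(n_b)]
        # "Shuffling" is implicit: each result is assigned to whichever group's
        # mean it sits closer to, ignoring which group it actually came from.
        for i, x in enumerate(a):
            correct += abs(x - loo_mean(a, i)) < abs(x - sum(b) / n_b)
        for i, x in enumerate(b):
            correct += abs(x - loo_mean(b, i)) < abs(x - sum(a) / n_a)
    return correct / (reps * (n_a + n_b))

# Studies measuring genuinely different effects: grouping is largely recoverable.
print("different effects:", round(assignment_accuracy(0.2, 0.6), 2))
# Studies that are pure noise around a common value: essentially a coin flip.
print("same effect:", round(assignment_accuracy(0.4, 0.4), 2))
```

Applied to two real meta-analyses, an assignment accuracy well above chance would be evidence that the underlying studies are measuring something real rather than producing random answers.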