Via Robin Hanson I found this study of the effects of antioxidant supplements. It studied five antioxidants (e.g., Vitamins A and C). Overall they were slightly harmful; selenium was the exception.
This isn’t intuitive — why should they differ? — but fits well with previous work:
1. The evidence for the benefits of selenium is overwhelming. You can look at a county-by-county map of US cancer rates and see a sharp drop along a certain line in the northeast. The line separates different geology: there is much more selenium in the soil on the low-cancer side. Yet another case where correlation is powerful evidence for causation. An experiment with selenium supplementation also found a reduction in cancer.
2. Several years ago, two experiments found that Vitamin A supplements increased lung-cancer rates. (Another study.) Later experiments cast doubt on Vitamins C and E. As one of Robin’s readers put it: “two of which were previously well known to be bad for you.”
Given this previous research, which is far more persuasive than the current study, the interesting contribution of the new study is methodological: will a meta-analysis of epidemiological studies reach the right conclusions? Will the signal outweigh the many sources of bias and error? In fact, it did, again suggesting that severe critics of epidemiology, such as John Ioannidis, go too far.
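The pooling step behind a meta-analysis like this one is just a weighted average of the individual trial results. Here is a minimal sketch of the standard fixed-effect (inverse-variance) approach in Python; the trial numbers are made up for illustration and are not taken from the study.

```python
import math

# Hypothetical per-trial results: (log relative risk, standard error).
# These values are invented for illustration only.
trials = [(0.05, 0.04), (0.10, 0.06), (-0.02, 0.05)]

# Inverse-variance weights: more precise trials count for more.
weights = [1 / se**2 for _, se in trials]

# Pooled log relative risk and its standard error.
pooled = sum(w * lrr for (lrr, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Back-transform to a relative risk with a 95% confidence interval.
rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * pooled_se),
      math.exp(pooled + 1.96 * pooled_se))
print(f"pooled RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

With these invented inputs the pooled relative risk comes out slightly above 1 (slight harm), but the confidence interval still straddles 1, which is exactly the kind of weak, bias-sensitive signal the debate is about.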
Serious questions have been raised about the methodology of this meta-study; see https://lesswrong.com/lw/20i/even_if_you_have_a_nail_not_all_hammers_are_the/ for the details. Robin Hanson brushed them off, but he did so out of motivated cognition; he has not actually looked at the data.
If someone ate nothing but cheese, do you think that a multivitamin supplement would do them more harm or good?