People who believe in “evidence-based medicine” say that double-blind clinical trials are the best form of evidence. Generally this is said by people who know very little about double-blind clinical trials. One reason they are not always the best form of evidence is that data may be missing. Nowadays more data is missing than in the past:
By [missing data] he [Thomas Marciniak] means participants who withdrew their consent to continue participating in the trial or went “missing” from the dataset and were not followed up to see what happened to them. Marciniak says that this has been getting worse in his 13 years as an FDA drug reviewer and is something that he has repeatedly clashed with his bosses about.
“They [his bosses] appear to believe that they can ignore missing and bad data, not mention them in the labels, and interpret the results just as if there was no missing or bad data,” he says, adding: “I have repeatedly asked them how much missing or bad data would lead them to distrust the results and they have consistently refused to answer that question.”
In one FDA presentation, he charted an increase in missing data in trials set up to measure cardiovascular outcomes.
“I actually plotted out what the missing data rates were in the various trials from 2001 on,” he adds. “It’s virtually an exponential curve.”
Another sort of missing data involves what is measured. In one study of whether a certain drug (losartan) increased cancer, lung cancer wasn’t counted as cancer. In another case, involving Avandia, a diabetes drug, “serious heart problems . . . were not counted in the study’s tally of adverse events.”
Here is a presentation by Marciniak. At one point, he asks the audience, Why should you believe me rather than the drug company (GSK)? His answer: “Neither my job nor (for me) $100,000,000’s are riding on the results.” It’s horrible, but true: Our health care system is almost entirely run by people who make more money (or make the same amount of money for less work) if they exaggerate its value — if they ignore missing data and bad side effects, for example. Why the rest of us put up with this in the face of overwhelming evidence of exaggeration (for example, tonsillectomies) is an interesting question.
Thanks to Alex Chernavsky.
How much missing data is there for home remedies circulated on blogs like this one? If 100 people decide to try honey before bedtime, what percentage will report back and be included in a post? Does the answer look closer to "0 or 1%" or to "99 or 100%"?
I became a great cynic about modern medicine when I read about the statin trials in which 15% (if my imperfect memory serves) of the treated group gave up in the first couple of weeks and were thereafter just quietly expunged from the stats. Presumably – but we’ll never know – they gave up because they were already suffering from adverse side-effects. That meant that all predictions of the proportion of the population who would so suffer were effectively lies. It also meant that the most interesting subjects of the experiment were deleted. After all, those who gain by statins – insofar as they exist – are just statistical facts, but people who suffer are actual, identifiable humans, to whom one could apply, if I dare use the word, science.
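The arithmetic behind this complaint is worth spelling out. Here is a minimal sketch with invented numbers (the 15% dropout figure comes from the comment above; the side-effect counts are hypothetical) showing how quietly expunging early withdrawals can understate a trial's reported side-effect rate:

```python
# Hypothetical illustration: how excluding early dropouts from the
# denominator and numerator understates the side-effect rate.
# All counts below are invented for the example.

n_treated = 1000             # enrolled in the treated arm
dropouts = 150               # the 15% who quit in the first weeks (expunged)
dropout_side_effects = 120   # suppose most quit because of side effects
completer_side_effects = 50  # side effects reported among those who stayed

# Rate the trial reports, computed over completers only
reported = completer_side_effects / (n_treated - dropouts)

# Rate if the withdrawals and their (unrecorded) side effects were counted
actual = (completer_side_effects + dropout_side_effects) / n_treated

print(f"reported: {reported:.1%}")  # ~5.9%
print(f"actual:   {actual:.1%}")    # 17.0%
```

The gap depends entirely on *why* people dropped out, which is exactly the information the trial no longer has — Marciniak's point that you cannot "interpret the results just as if there was no missing or bad data."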