The blog Science-Based Medicine ran a long critical comment about my recent Boing Boing piece (“Tonsillectomy Confidential: doctors ignore polio epidemics and high school biology”) followed by a back-and-forth (my reply, their reply to my reply, on and on) in the comments.
The exchange had three curious features.
1. In Tonsillectomy Confidential, I described how Rachael critically evaluated what a naturopath told her:
Rachael and her son went to see a naturopath that a neighbor had recommended. The naturopath was especially knowledgeable about nutrition and supplements. After an hour interview, she suggested Vitamin D3 (5000 IU/day), a multivitamin, Vitamin C (500 mg/day), and powdered larch bark. Rachael searched for research about these recommendations. She found many studies that suggested Vitamin D might help. Her son is a pale redhead and used sunblock a lot. It was easy to believe he wasn’t getting enough Vitamin D. Because Vitamin D won’t work properly without other vitamins (called co-factors), a multivitamin was a good idea [Rachael discovered during her research]. Rachael found studies that implied that a multivitamin was very unlikely to be very harmful. She found few relevant studies about Vitamin C. Maybe extreme claims about its benefits had scared off researchers — “Linus Pauling burned that bridge,” said Rachael. But she took the Vitamin C recommendation seriously because the naturopath had made other reasonable recommendations, the recommended dose was not large, Vitamin C is easily excreted in urine (in contrast to building up in the body), and Rachael had never heard of anyone having trouble at that dose. The naturopath had said that larch bark had reduced ear infections in children with chronic ear infections. A little bit of theory supported this, Rachael found, but overall the larch-bark research was “dodgy,” she said.
This was described by the Science-Based Medicine critic (Steven Novella) as “blatantly not evidence-based”.
2. In my first reply to the criticism, I wrote:
In other words, there is some evidence supporting the value of larch bark (“early laboratory evidence”) and some evidence (“a more recent study in mice”) not supporting the value of larch bark. Given this, to say “available scientific evidence does not support claims . . .” is false. An accurate statement is that some evidence does and some evidence doesn’t.
This got the following reply from a second critic (David Gorski):
No, Seth. Note two words Steve used, “in humans.” Steve was quite correct. If there is only a preliminary animal study, even if positive, that does not support the efficacy of larch bark in humans.
Apparently Gorski thinks animals (e.g., rats) and humans share no DNA. A few sentences later, contradicting himself, he notes that animal studies are used as screening tests.
3. Finally there was this, from Steven Novella:
It is fine to search for information yourself, and no one here is advocating “blind trust” in anyone. We are all activist skeptics. But it is folly to substitute one’s own opinion for that of experts who have spent years mastering a subject.
What a lovely motto for this blog: "It is folly to substitute one's own opinion for that of experts who have spent years mastering the subject." And yet, after all that study, they think animals and humans share no DNA.
Maybe they could read this article:
https://yudkowsky.net/rational/bayes
and understand what evidence really is?
Science is not about picking cute statements that serve as epistemic axioms (“correlation is not causation”, “anecdotes are not data”, “rat studies say nothing about humans”).
It’s about evaluating evidence in the context of Bayesian inference. It’s infuriating to see rational people not understand this.
Steven Novella is a professor of neurology at Yale Medical School. He’s an entrenched insider. His status necessarily limits the scope of his skepticism. I occasionally read his blog, and it’s fine as far as it goes. But he usually focuses on easy targets.
Yeah, I agree. That’s a good way of putting it. Evidence pushes belief in Statement X up (toward agreement) or down (toward disagreement).
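To make that concrete, here is a minimal sketch of the bookkeeping behind "pushes belief up or down" (my own illustration, not anything written in the SBM exchange), stated as the odds form of Bayes' theorem:

$$
\frac{P(X \mid E)}{P(\lnot X \mid E)} \;=\; \frac{P(E \mid X)}{P(E \mid \lnot X)} \times \frac{P(X)}{P(\lnot X)}
$$

Here E is an observation and X is the statement being judged. The middle term, the likelihood ratio, is the "push": greater than 1 and E is evidence for X, less than 1 and E is evidence against X. Nothing in this arithmetic requires the evidence to be conclusive; weak evidence just means a ratio close to 1.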
I didn’t know that. It makes him more quotable.
I found four things odd in the whole exchange.
First, I couldn’t tell whether the folks at the SBM blog think Rachael should continue her current approach (nutritional), or should withdraw those efforts and instead choose surgical removal of the tonsils. (After all, the nutritional approach had been suggested by one of those ‘quack naturopaths’.)
Second, they make no distinction between levels of risk. If one’s femoral artery is cut, it would be poor personal risk management to insist, upon arrival at the ER, that the doctors first search PubMed for the perfect solution before taking any action. On the other hand, for a situation where a mother has been coping with a child who has had six sore throats in a year, the risk of delaying surgery for one more month while trying a nutritional approach seems to me to be low.
Third, they fail to acknowledge that experts disagree. Every one of us has to curate which experts to believe. The most obvious example is nutrition.
Finally, I suspect that most doctors feel they cannot recommend nutritional approaches, probably because of legal concerns. For example, I had a year-long issue which caused numbness in my right foot. My doctor said he could prescribe a Big Pharma product which had a low probability of success and a history of negative side effects, but he would do so reluctantly and only if I insisted. I appreciated his advice and chose not to use that product. What he did not tell me was to look for alternative approaches. I looked online for advice and, yes, I considered advice from people without an M.D. behind their names. Now, after a year of nutritional changes, supplementation, and topical treatments, that condition has mostly cleared. I strongly suspect that my efforts caused the condition to improve, rather than its clearing on its own. I can’t prove it, and since there is no money to be made by producing a trademarked product, it is unlikely that research will be funded for trials on this type of solution.
Good point. The question “what should Rachael have done?” wasn’t answered. And Novella’s claim that what actually happened (her son had no more sore throats) provided “evidence against” Rachael’s approach couldn’t be more ridiculous. What are they putting in the water at Yale Medical School?
I used to have serious problems with my tonsils; whenever I got sick they’d start to hurt and stay that way for weeks if not months. And I was sick a lot; I’d have this several times per year …
It stopped when I started taking fish oil about 18 months ago (I also started weight training at the same time, but I’ve had periods where I exercised a lot before; the fish oil was new). I take around 3-5 g of fish oil per day.
In those 18 months I was sick once – just a case of the sniffles; but the tonsils never got inflamed and I was well again after a week. No lingering pain in the throat! I don’t think I can communicate how awesome that is.
These days I also take Vitamin D3, but I’ve only been doing so for 6 months; the tonsil problems were gone long before that.
Request: Next do “Botulism Confidential.”
Thought you’d find this article in today’s WSJ of interest:
“What if the Doctor Is Wrong?”
“Hardeep Singh, chief of the health policy and quality program at Michael E. DeBakey VA Medical Center in Houston, says a growing number of centers are requiring an internal second review of pathology reports to prevent misdiagnosis. If the second opinion differs markedly, a third opinion may be necessary to get a consensus on what course of treatment is best.”
https://online.wsj.com/article/SB10001424052970203721704577159280778957336.html
That nicely puts “It is folly to substitute one’s own opinion for that of experts who have spent years mastering the subject.” into perspective. We’re supposed to trust the experts when the experts won’t trust the experts?
Do the doctors at SBM think we’re all fools?
(I think they do, but you should draw your own conclusions…)
“Apparently Gorski thinks animals (e.g., rats) and humans share no DNA. A few sentences later, contradicting himself, he notes that animal studies are used as screening tests.”
You know when you read something, and it’s as if the words themselves manifest into a face-palm and make you exhale with a mixture of pity and impatience? That’s exactly what I feel when reading something that lacks any logical credibility. Objective achieved with those two sentences there.
I’m not sure why you think Gorski’s quote is conclusive evidence that he doesn’t think humans and mice/rats share any DNA. He simply brought up the point that drugs/compounds/botanicals that produce a biologic effect in mice might not produce the same (or indeed any) biologic effect in humans. Mice and humans have many homologous genes, but rodents have massively different CYP450s than humans (these enzymes often help metabolize drugs). So it’s not at all unreasonable to ask for proof of efficacy in, you know, actual humans.
Regarding the “contradiction,” it’s certainly possible for someone to point out that while evidence of drug efficacy in rodents is promising (hence why they are used for screening), it doesn’t mean that the drug is guaranteed to work in humans (even though we share TEH DNAZ).
And yeah, if you couldn’t do a proper literature search/don’t have an idea about how drugs are tested, you should probably refrain from bringing the snark when corrected by someone who actually knows something about the subject.
“What are they putting in the water at Yale Medical School?”
Hubris
There are arrogant and unreasonable physicians. There are also arrogant and unreasonable bloggers and commenters. I don’t think that being smart and accomplished always = arrogant. Most doctors and researchers are genuinely good people seeking to do right by others. Just like, presumably, most bloggers are genuine and good people.
I think the esteemed doctor is well aware that mouse and human genomes share similarities; that kind of snark does little to bolster your argument. I don’t understand how his statement that rat studies do not equate to human evidence is contradicted by our sharing DNA. We share DNA, but have vastly different anatomies, proteomic expression, et al. That’s why we do clinical trials in medicine: bench research, animal testing, then small-scale human studies, then large-scale human studies. There are examples of promising drugs in animals that are ineffective or harmful in humans.
And for every story of fish oil curing chronic sore throats, there are examples of naturopathic approaches being harmful to the point of mortality (see Steve Jobs). Both are anecdotes and not substitutes for an evidence-based approach.
I know this will sway no one in the blogosphere, but I hope it will dampen the shrillness of the debate.
The fact that humans and rats share DNA does not mean that conclusions reached in rats are always relevant in humans. The patent office is littered with drug candidates that behaved very differently in humans than they did in model organisms like rats.
I think Novella’s statement is absolutely accurate. Having an opinion is great, but trusting experts is a necessary byproduct of living in a society based on specialization. Reasoned challenge of experts’ viewpoints and (especially) examination of their motives and biases is necessary. However, adopting an anti-expert viewpoint is counterproductive.
“Apparently Gorski thinks animals (e.g., rats) and humans share no DNA. ”
Where on earth did you get this idea? Of course mice are used in screening tests…that’s where we start. Then we move to animals that are genetically closer to humans and finally to human trials. A positive result in an animal study is no indicator that the treatment will work in humans, but a negative result will usually inhibit further research. The point is to see if it’s POSSIBLE the treatment will work. A mouse study is not conclusive of anything except that further research is needed.
Dr Novella is also the lead host of The Skeptic’s Guide to the Universe, the biggest podcast in skepticism, and is known as an excellent science educator, especially in evaluating research. He’s also a huge proponent of Bayesian inference, which he has spoken on at length. The point of his article was not to advocate for or against tonsillectomy, but to evaluate your understanding of the research, which he found flawed because of lack of expertise and bias. Dr Novella is a neurologist and therefore would not presume to make a recommendation for or against tonsillectomy, but would presume to be able to get a consensus of what the research shows is the best treatment. His podcast can be an excellent tutorial for evaluating research and I highly recommend it.
Do you honestly think that a little DNA homology is all that’s needed to ensure that the efficacy of medical therapies transfers between species? There is more to it, my friend. Post-transcriptional splicing? I suppose they never taught you about that at Berkeley? Your continual oversimplification of complex issues in an attempt to achieve folksy populist charm may work on many, but not on us “experts”.
Your second point is pretty dishonest. Gorski didn’t say there’s no shared DNA in mice and humans. He didn’t say that AT ALL. You just made that up.
Kirk: You’re creating quite a false equivalence with the modest statement “Experts disagree,” when the relative weight of evidence for varying viewpoints is so disproportionate. There is essentially zero good evidence for vitamin or nutrition based treatment for recurrent sore throats, compared to the reams of data, both pro and con, for tonsillectomy.
Seth, the reason Novella demurred on the question of whether your friend’s child should proceed with surgery (other than the folly of trying to diagnose/treat over the Internet) was explicitly stated: he wanted to point out a clear example of biased reading of the evidence, not make a treatment pronouncement outside of his specialty.
And, sorry, the unqualified litany of ostensible risks of tonsillectomy without acknowledging (or being aware of) countervailing evidence or problems with the suggested risks raises the question of bias. I’ll give you the credit of assuming that any bias is accidental, due to cursory or inexperienced reading of the data, rather than intentional (as is often the case with, say, anti-vaxxers).
“Countervailing evidence” to the conclusion that tonsils are part of the immune system? Care to say what that is? This comment raises the question of education — whether you have been educated well enough to know (a) how the immune system works and (b) the evidence behind that understanding.
“A little”? It’s more than a little. It’s obvious that there are great similarities between rat physiology and human physiology. If they are not based on DNA overlap, what are they based on?
Not so. The many similarities between mice and humans mean that if Statement X is true for mice, X is more likely to be true for humans than if X is not true for mice.
Could you explain why you think anecdotes are not data? I hear that a lot. But it seems to be more of an insult (“you stink!”) than an empirical statement.
That’s not what he said. He made a stronger point — that a positive result in mice does not make a positive result in humans more likely (his exact words: “If there is only a preliminary animal study, even if positive, that does not support the efficacy of larch bark in humans”). That’s absurd, but that’s what he said.
I believe that you should look at evidence whenever possible — at least where doctors are concerned. Is that “anti-expert”? If so, what is counter-productive about it?
Curious. Here’s what Gorski said:
Here’s what I said in response:
I reached that conclusion because I could not think of another explanation for the pattern of results that Gorski refers to: results with animals tell us nothing about what will happen with humans. In other words, knowing that X is true for rats tells us nothing about whether X will be true for humans.
Let’s assume it’s true that animal results tell us nothing about what will happen with humans. Can you think of another plausible explanation for this pattern of results besides zero DNA overlap?
“Not so. The many similarities between mice and humans mean that if Statement X is true for mice, X is more likely to be true for humans than if X is not true for mice.”
Yes, and therefore more research is needed. It is not conclusive evidence of efficacy in humans. It shows a possibility of efficacy. Which I stated in my comment.
@ShadowfaxMD
You say, “There is essentially zero good evidence for vitamin or nutrition based treatment for recurrent sore throats, compared to the reams of data, both pro and con, for tonsillectomy.” Color me skeptical. Wouldn’t this have been highlighted by one of the editors for the Science-Based Medicine blog? All they would need to do is cite the studies. Can you list those PubMed papers which show SBM-approved double-blind studies indicating no efficacy?
Since you have so bravely joined the give-and-take over here, I would like to pose a few questions to you. If this situation were happening to a child in your extended family, would you counsel scheduling immediate surgery once the physician communicated that opinion? (I assume you are aware of the risks of surgery.) Or would you say to the parents, “There is an alternative, and I’ll be the first to say it may be a long shot, but, you know, the risks are low for delaying surgery for a month. You might consider a nutritional approach like the one which reportedly worked for Rachael and her son.”
Do you think Rachael should stop the nutritional approach and schedule surgery?
And finally, I’d like to make a point about experts. Experts are great. I was one in a particular field several years ago. And yet, sometimes we ignore experts when the risk is low. The major illustration is nutrition. I’m guessing from your handle that you’re an MD or otherwise associated with the medical field. If so, are you an MD who treats patients exclusively for nutritional problems? My guess: no, that’s not your exclusive focus. You’re not a nutritional expert and thus you probably base your Way of Eating on one of the eating plans developed by an M.D. Am I right? Which one? Dr. Atkins, Dr. Ornish, Dr. Willett? And here’s the key issue . . . do you completely and absolutely comply with that diet? Or do you subtract something, or adjust a ratio, or add something? For example, it may be a low-salt diet but you like salt and your own research indicates there’s nothing wrong with a reasonable amount of salt. If so, then you’re ignoring an expert. And why do you ignore an expert? Because the risk is low and you’ve done your homework.
Nice straw man ya got there, Seth. Let’s put it this way. A positive animal study might mean it’s more likely that a treatment will work in humans, but it is still not evidence of efficacy in humans. (Amber K’s comment was spot on.) In other words, it’s suggestive that a therapy might work, not evidence that it will. That’s why no clinician bases therapy decisions in humans on animal studies.
I can only conclude that you really don’t understand this or are being deliberately obtuse about this point.
As for your comment about my supposedly thinking that rats and humans don’t share DNA, now really. That was just plain dumb. I’m sorry, but there really is no politer way to put it. As others have pointed out, just because we share DNA does not necessarily mean that results in rats will translate into humans.
Huh? I fail to understand this. You seem to be using “evidence” in a way I have never encountered. The usual meaning of “evidence” is this: Observation X is evidence for Statement A if and only if Observation X makes Statement A more plausible. With this usage, anything that is “suggestive that a therapy might work” (= makes it more plausible that a therapy will work) is also “evidence” that it will work. Could you explain what you mean by “evidence” in your sentence?
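To spell out that usage as a formula (just a restatement of the definition above, not a new claim):

$$
X \text{ is evidence for } A \iff P(A \mid X) > P(A),
$$

which, whenever 0 < P(A) < 1, is the same as requiring the likelihood ratio $P(X \mid A) / P(X \mid \lnot A)$ to exceed 1. On this usage, “suggestive” simply means weakly positive evidence: it raises the plausibility of A a little rather than a lot.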
Of course not. But that’s not what you said. You made a stronger statement:
Whereas I think that a preliminary animal study, if positive, does support (= make more plausible) the efficacy of larch bark in humans. My belief is based on the many many similarities that have been observed between lab animals and humans (e.g., any physiology textbook). Can you explain the basis of your belief (“If there is only a preliminary animal study, even if positive, that does not support the efficacy of larch bark in humans”)?
Curious. You think such terminology improves your argument?
Evidence of efficacy in humans would be the observation of an effect after administering to humans. Administering to a rat can only provide evidence of efficacy in rats. Seeing an effect in rats might lead you to believe that there will be a similar effect in humans but it is not evidence of such an effect. This is why it is suggestive but not evidence of efficacy. There have been plenty of therapies that looked promising based upon animal studies which have later failed when tested on humans. If evidence of efficacy in rats truly was evidence of efficacy in humans there would be no such failed transfers.
You’re being unfair here, whether deliberately or accidentally. Everything is made of the same particles and works in the same way, so everything is technically evidence for everything. In the field of evidence-based medicine, we must thus restrict ourselves to useful evidence. Experience has found that the results of a small number of animal studies do not constitute useful evidence for what works in humans, merely a promising line of attack to conduct further research and build up a body of evidence that is useful.
Care to provide evidence for that?
You mean the statement by David Gorski that I have been complaining about is true by definition? I thought we were discussing science, not mathematics.
I think there is a wording issue here.
“evidence” has two meanings.
1) any kind of information
2) conclusive proof
In the jargon of SBM folks, evidence means (as in court) conclusive proof. Hence their use of the word.
Merriam-Webster gives both definitions here:
https://www.merriam-webster.com/dictionary/evidence
Yes, different metabolism and breakdown of compounds. Differences in physiology, etc. Extending your logic: cows can digest cellulose, and since we share a lot of DNA with cows, we should be able to digest cellulose too. Since we can’t digest cellulose, we must have zero DNA overlap with them; it’s the only explanation.
Since our metabolism and other aspects of physiology are heavily dependent on our DNA, I fail to see how this is a substantially different explanation.
Here’s what that linked-to definition says:
1a : an outward sign : indication b : something that furnishes proof : testimony; specifically : something legally submitted to a tribunal to ascertain the truth of a matter
2: one who bears witness; especially : one who voluntarily confesses a crime and testifies for the prosecution against his accomplices
Not the two meanings you give.
Seth, these guys aren’t the bad guys. They are trying to help you understand what they mean.
You aren’t addressing many of the points made in response to you.
1. If I kill a mouse, is this evidence that I will kill a man?
2. What is your explanation for things that affect different animals in different ways?
3. What if the Doctors are being honest when they tell you what they mean by the word evidence?
Seth,
I think this can help you understand the meaning of evidence as we’re using it.
“scientific evidence:
Results when a theory or hypothesis is tested objectively by other individuals such as in an experiment or in a controlled environment.”
The mouse experiment tests the hypothesis that the treatment is efficacious in mice against the null hypothesis that the treatment has no effect in mice. The results from the experiment are evidence for or against that null hypothesis. If we disprove the null hypothesis (i.e., the treatment is efficacious in mice), then we will set up further experiments, each testing a new set of hypotheses.
It is not until we get to a controlled, double-blinded study in a human population of significant size that we are testing the hypothesis, “this treatment is effective in humans.” Therefore any evidence gathered until that time is not considered evidence that the treatment is effective in humans. It is only evidence for the hypotheses that were tested in each experiment.
So in a nutshell, each step of the process suggests a possibility of success at the next step, but is not evidence for it. Why? In order to make the statement that evidence of efficacy in mice is evidence of efficacy in humans, we would have had to set up an experiment to test that hypothesis and prove the null hypothesis invalid. We have no need to do that, however, because of the large number of studies that have proved efficacy in mice and not in humans (in other words, it’s already been done).
I hope that makes things more clear.
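A small worked example may make the commenter's point concrete. This is a minimal sketch with invented numbers (not data from any real larch-bark study): the mouse experiment and the human trial test different null hypotheses, so a result in the first cannot, by itself, settle the second.

```python
# Minimal sketch; all counts are invented for illustration.
from scipy.stats import fisher_exact

# Hypothetical mouse experiment: [infected, not infected] per group.
mouse_table = [[3, 17],   # treated mice
               [11, 9]]   # control mice
_, p_mice = fisher_exact(mouse_table)

# Hypothetical human trial with the same layout: a separate experiment
# testing a separate null hypothesis ("no effect in humans").
human_table = [[9, 11],   # treated children
               [10, 10]]  # control children
_, p_humans = fisher_exact(human_table)

# Rejecting the mouse null hypothesis (small p_mice) does not reject the
# human null hypothesis; only the human data bear on that.
print(f"mouse experiment: p = {p_mice:.3f}")
print(f"human experiment: p = {p_humans:.3f}")
```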
Doug wrote:
I disagree. Here are three books that illustrate why experts should be treated with deep suspicion:
A Random Walk Down Wall Street, by Burton Malkiel. The author provides convincing evidence that stock analysts (and other Wall Street professionals) don’t know what they are doing. Note that this is a separate issue from corruption.
Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts, by Carol Tavris and Elliot Aronson. What’s particularly interesting about this book are the examples where experts are shown to be dreadfully, horribly wrong — but then stubbornly refuse to admit it.
Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America, by Robert Whitaker. The author argues that “expert” psychopharmacologists have produced drugs that are not only ineffective but are actually harmful, though most mental-health professionals continue to swear by them.
Not sure what you mean. But yes, while we share some DNA and aspects of our physiology with other species, we also have substantial differences. Thus whatever effect a certain drug or treatment may have in a rodent, there is by no means any guarantee that it will have the same effect in humans. In some cases it might even have the opposite effect. Animal studies are important in that they allow us to test the plausibility of certain treatments and have allowed for a great many medical advances. But they have a limit as most doctors and scientists know. Which is why translational research is important for bridging the gap between basic science and medical treatment. But you just can’t jump to the conclusion that because something may work in rodents, then you have evidence that it works in humans.
What is the evidence for this claim? Which would shock epidemiologists, by the way.
“What is the evidence for this claim? Which would shock epidemiologists, by the way.”
Sigh. I tried, Seth. I really tried.
No epidemiologist would be shocked by rigorous experimental practices to remove subjective bias. It seems to me you are just cherry-picking statements from everyone’s comments to try to mine some kind of semantic victory for yourself. This is petty and purposefully obtuse, and I am done playing this game. Good luck to you.
No, I’m trying to learn something. I’ll never know where your beliefs come from — what is behind them — unless I ask. I pick one statement (what you call “cherry-picking”) to make responding more manageable.
I referred to 1a vs. 1b definitions.
The 1b definition does not mean “conclusive proof”. Not even close. It just means the sort of stuff that is called “evidence” in legal proceedings.
Humans have a gallbladder, rats don’t. Rats have whiskers, humans don’t. I could go on. I mean, seriously? I don’t think genetics works the way you think it does.
It’s very simple.
Some things work the same in mice and humans, yet some things don’t.
If you are really trying to learn something here, then why not engage the points that people have made to you? Instead of deciding what you think other people mean, why not listen to what they tell you they mean?
Also kind of strange that the person above gave three books about how experts fail us. I guess those authors are experts on the matter?
A treatment that works well for mice does not automatically work for humans. There are tons of treatments that worked on lab rats but failed in humans. Which is bad news for my son…
Such a long and interesting conversation…that seems to be winding down. I had my tonsils removed forty years ago and can still see the video in my mind’s eye of my tearful plea to my father: “Don’t let them take me away!” I remember that the promised course of post-surgery therapeutic doses of ice cream did not materialize as soon as expected. With what I know now (I utilize alternative and nutritional therapies more so than pharmaceutical and surgical ones), I might have kept my tonsils a bit longer.
In arguments, it is easy to keep gnawing at one tree. Gorski responded to Roberts’s challenge to the widespread practice of tonsillectomy by suggesting that laymen do not always know how to interpret and prioritize the data. Roberts has taken the bait on this minor question of mice and men and evidence, and the trap has snapped shut. I must give points to the doctors on this one, but wish Gorski would respond to the spirit of Roberts’s original post: Is questioning standard surgical practices a good thing?
Greg, thanks for your final question (“Is questioning standard surgical practices a good thing?”). What I am really questioning is how evidence-based medicine is done. They omit a lot of evidence. Often the omitted evidence is much more negative than positive. This leads to a huge positive bias in what outside observers see — the treatment under review appears much better than it actually is. Yet the possibility of such a bias is never mentioned.
That’s why rat studies help us predict what will happen when a similar experiment is done in humans. They don’t provide certainty, of course — the results of an animal study don’t allow us to predict with certainty the results of a human study — but they are better than nothing. This is why Gorski’s complete dismissal of a “preliminary animal study” is so strange.
Last year I put a lot of compost in the ground and my carrots grew much bigger than they had before.
By your argument, since humans share a great deal of DNA with carrots (about 20%), this is evidence that putting a human in compost will cause them to grow bigger.
Suppose someone suggested that this was not evidence at all. We could then just copy-and-paste your argument:
Now this argument is clearly absurd. There are hundreds of plausible explanations for why this carrot result is not evidence for the conclusion, and it is a strawman to say that the only possibility would be if there were “zero DNA overlap” between carrots and humans.
On the other hand, if you do actually believe that my “carrot study” above *is* evidence that putting a human in compost will cause them to grow bigger, then you are using a definition of “evidence” so useless to research as to be virtually meaningless, and, in any case, this is not the kind of “evidence” that a doctor should rely on when prescribing medicine.
Alex, I would like to add another book to your list about being wary of experts: ‘Future Babble’ by Dan Gardner. It examines not only the so-called experts but also why we often believe them.
Okay, Sam Fen, two questions:
1. Let’s assume results from Organism X (carrots, perhaps) tell us exactly nothing about what will happen when humans get the same treatment. What’s your plausible alternative to the zero DNA overlap explanation?
2. What did you think of the main point of my article — that “evidence-based-medicine” practitioners ignore too much evidence?
Sam Fen, if you answer my Question #2, I would be happy to answer yours.
Clinical practitioners should only use evidence of efficacy in the species they are treating. Researchers can use evidence in similar species to guide their efforts. That’s David Gorski’s point that you seem to have completely missed.
You mean Gorski’s point was true by definition (of “evidence of efficacy”)? Huh. I thought we were discussing science, not law.
In response to question number two above:
I’m sure you would agree that not all the studies in the literature stand equally. Therefore doctors and scientists must choose which studies to believe. For this they need to have a consistent system to help separate the good from the bad. This system is always evolving and getting better.
Some of the principles that have come out of this system are:
Human studies are more relevant than animal studies.
Large studies outweigh small studies.
Blinded studies outweigh non-blinded studies.
Double-blinded studies outweigh blinded studies.
Effects with explainable mechanisms outweigh effects with unexplained mechanisms when dealing with small effect sizes.
I agree with your first statement (not all studies equal). However, your second sentence (“therefore doctors…”) doesn’t follow from it. Instead of “choosing what studies to believe” doctors and scientists can assign each study a degree of belief. And consider all of them. Not ignore any of them.
In my experience this is what scientists usually do. Only doctors and evidence-based medicine advocates and the like promote black-and-white all-or-nothing thinking about evidence. Bayesian statistics is a formalization of what most scientists do.
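To illustrate what "assign each study a degree of belief and consider all of them" could look like in practice, here is a minimal sketch in Python. Every number is invented, and the down-weighting scheme is just one crude way to discount weaker designs instead of discarding them:

```python
import math

# Prior odds that the treatment works in humans (assumed for illustration).
prior_odds = 0.2

# (likelihood_ratio, weight) per study. A large double-blind human trial
# counts fully; a preliminary mouse study is discounted, not ignored.
# All values are hypothetical.
studies = [
    (3.0, 1.0),   # hypothetical double-blind human trial, positive
    (2.0, 0.3),   # hypothetical preliminary mouse study, positive
    (0.8, 0.5),   # hypothetical small unblinded human study, slightly negative
]

# Combine on the log-odds scale; a weight shrinks a study's pull toward zero.
log_odds = math.log(prior_odds)
for likelihood_ratio, weight in studies:
    log_odds += weight * math.log(likelihood_ratio)

posterior_odds = math.exp(log_odds)
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior odds: {posterior_odds:.2f}")
print(f"posterior probability: {posterior_prob:.2f}")
```

The point is not the particular numbers but the shape of the calculation: no study is ignored; each one moves the degree of belief by an amount that reflects how much it deserves to be trusted.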
I can’t possibly answer that, not knowing the majority of evidence-based-medicine practitioners. How can I possibly make a generalization like that based on your anecdotes?
But I was dealing with a very specific point, not an abstract one: should a family doctor make a medical recommendation based on one rat study?
If you knew more about evidence-based medicine, you would know that the situation I described in Tonsillectomy Confidential is typical: A vast amount of evidence is ignored. Even if you didn’t know that, you could comment on the particular example I described.
You haven’t answered any of my questions yet, you just keep asking more of your own.