Lots of scientists say science is self-correcting. In a way this is surely true: a non-scientist wouldn’t understand the issues. If anyone corrects scientific fraud, it will be a scientist. In another way, this is preventive stupidity: it reassures and reduces the intelligence of those who say it, helping them ignore the fact that they have no idea how much fraud goes undetected. If only 1% of fraud is corrected, it is misleading to say science is self-correcting. A realistic view of scientific self-correction is that there is no reward for discovering fraud and plenty of grief involved: the possibility of retaliation, the loss of time (it won’t help you get another grant), and the dislike of bearing bad news. So whenever fraud is uncovered it’s a bit surprising and bears examination.
What I notice is that science is often corrected by insider/outsiders — people with enough (insider) knowledge and (outsider) freedom to correct things. As I’ve said before, Saul Sternberg and I were free to severely criticize Ranjit Chandra. Because we were psychologists and he was a nutritionist, he couldn’t retaliate against us. Leon Kamin, an outsider to personality psychology, was free to point out that Cyril Burt faked data. (To his credit, Arthur Jensen, an insider, also pointed in this direction, although not as clearly.) The Marc Hauser case provides another example: Undergraduates in Hauser’s lab uncovered the deception. They knew a lot about the research yet had nothing invested in it and little to lose from loss of Hauser’s support. This is another reason insider/outsiders are important.
As far as I can see, when people say that science is self-correcting, they typically don’t mean that specific instances of fraud are detected, but that additional research will not confirm incorrect findings.
True, they mean that a broad range of wrong answers are corrected, not just fraud. When a published result turns out to be wrong, it doesn’t mean it was fraudulent, of course. People who say science is self-correcting have a mental image of science wherein important experiments are repeated by others in attempts to extend the initial results. In these attempts, the veracity of the initial results is confirmed (or not). This view of science isn’t terribly accurate — in some areas of science experiments are so difficult or expensive or diffuse (it’s unclear what’s central and what isn’t) that replication is rare. Anyone who seriously believes science is self-correcting should study the career of Ranjit Chandra.
Still, a lot of useful work comes out of science — maybe the takeaway is that science is more self-correcting than most human institutions. Scary, isn’t it?
Perhaps the takeaway is that science is self-correcting when it matters. For example, med students were told, following Aristotle, that the human liver has a certain shape. Actually it had a different shape. That error took more than a thousand years to correct. But it didn’t matter.
It is scary indeed. Also, a young scientist starting out would definitely want to become an insider rather than an outsider. The incentive to ignore rather than correct the mistakes of an established insider is very great. And I guess most of the real science is done by those trying to make a name for themselves, who are quite young.
The example of Cyril Burt is an unfortunate one, as he was almost certainly framed. Arthur Jensen and W.D. Hamilton came to his support. Leon Kamin is an IQ demagogue staunchly opposed to the idea of the heritability of IQ.
Maybe this explains why you cannot see many journal articles without a subscription. The outsiders who have nothing to lose would be more apt to shred their arguments.
“Perhaps the takeaway is that science is self-correcting when it matters.”
Yes. I think you have to take into account your ‘modern Veblen’ thesis that most of what passes for science in universities has neither value nor importance. Errors in those areas will stand for a long time without correction. Errors in the valuable/useful fields will tend to be corrected because there are reputations to be made doing so. However, it still may require the old-generation gatekeepers to retire before it is career-positive to correct their errors.
Dennis, Jensen wrote a paper saying that Burt’s data was unreliable. Kamin was the first to point out that some of Burt’s correlations stayed the same to 3 decimal places (e.g., 0.771) as the sample size went from 20 to 50. See
https://www.hum.utah.edu/~bbenham/Phil%207570%20Website/csSir%20Cyril%20Burt.pdf
Gary Taubes accuses nutrition science of clinging to the false notion that fat is bad for you for over 50 years! He claims that evidence disconfirming that hypothesis has been plainly available for many decades, yet generations of scientists have gone along with obviously false beliefs because… well, he doesn’t really say why. Apparently nutrition scientists are so dishonest and mendacious that they would rather propagate obvious falsehoods than upset the apple cart that is delivering them their grants.
Could science really be this bad?
Hal, I think that science really can be that bad. I’ll give you another example.
Robert Whitaker is sort of the “Gary Taubes” of psychiatry. Whitaker recently wrote a book called Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America. The book details the devastating effects of psychiatric drugs (antidepressants, antipsychotics, antianxiety agents, etc.). The book is meticulously documented and is quite an impressive piece of scholarship. The psychiatric establishment has had a love affair with psychotropic drugs since 1952, when chlorpromazine was introduced. So we’re going on 60 years here.
It’s interesting to note that Whitaker used to be an insider, of sorts. He worked as director of publications at Harvard Medical School. He also founded a publishing company that covered pharmaceutical clinical trials. He also worked as a science journalist for the Boston Globe. Because of his insider/outsider status, he was able to publish a scathing and cogent critique of psychopharmacology.
I think you’re right that “science is self correcting when it matters.”
All your examples are from social science. I’m a molecular neuroscientist who works in international development now, and I have seen both extremes. Frauds thrive in a non-peer-reviewed “foreign aid” world, but in neuroscience they succumb much faster than in the social sciences. Maybe it’s because people do repeat experiments there, since replication at least matters for publishing the next result.