There are two mistakes you can make when you read a scientific paper: You can believe it (a) too much or (b) too little. The possibility of believing something too little does not occur to most professional scientists, at least if you judge them by their public statements, which are full of cautions against too much belief and literally never against too little belief. Never. If I’m wrong — if you have ever seen a scientist warn against too little belief — please let me know. Yet too little belief is just as costly as too much.
It’s a stunning imbalance, one I have never seen pointed out. And it’s not just quantity, it’s quality. One of the foolish statements that intelligent people constantly make is “correlation does not imply causation.” There’s such a huge bias toward saying “don’t do that” and “that’s a bad thing to do” (I think because the people who say such things enjoy saying them) that those who repeat this slogan never grasp two not-very-difficult points: (a) nothing unerringly implies causation, so don’t single out correlations, and (b) correlations increase the plausibility of causation. If your theory predicts Y and you observe Y, your theory gains credence. Causation predicts correlation.
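To make the point concrete, here is a minimal Bayes-rule sketch (the numbers are invented purely for illustration, not taken from any study): if a causal theory predicts a correlation, then observing that correlation raises the probability of the theory, even though it does not prove it.

```python
# A made-up example: a causal theory predicts a correlation, so seeing
# the correlation should raise our belief in the theory (without proving it).

def posterior(prior, p_obs_if_theory, p_obs_if_not_theory):
    """P(theory | observed correlation), by Bayes' rule."""
    p_obs = p_obs_if_theory * prior + p_obs_if_not_theory * (1 - prior)
    return p_obs_if_theory * prior / p_obs

prior = 0.2               # assumed prior belief in the causal theory
p_corr_if_cause = 0.9     # causation (nearly always) produces the correlation
p_corr_if_no_cause = 0.3  # the correlation can also arise without causation

print(posterior(prior, p_corr_if_cause, p_corr_if_no_cause))
# ~0.43: higher than the 0.2 prior. The correlation did not prove
# causation, but it made causation more plausible.
```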
This tendency is so common it seems unfair to give examples.
If you owned a car that could turn right but not left, you would almost always drive off the road. When I watch professional scientists react to this or that new bit of info, they constantly drive off the road: They are absurdly dismissive. The result is that, like the broken car, they fail to get anywhere: They fail to learn something they could have learned.
Addendum. By “too little belief” I meant too little belief in facts — that this or that new observation has something useful to tell us. Thanks to Varangy, who pointed out that there is plenty of criticism of too little belief in this or that favored theory. You could say it is a kind of conservatism.
This post makes you sound a lot more like Aunt Deirdre than your recent spat would suggest. You two should kiss and make up; you’d make good intellectual allies.
Matthew,
You mean McCloskey made similar points somewhere? Where?
What do you think of Rupert Sheldrake? This episode of the CBC Radio program “Ideas” put a lot of emphasis on the way the scientific establishment (notably, the editor of Nature) ostracised him after he published A New Science of Life in 1981 — admittedly, a book with some fairly far-out ideas.
Here’s a podcast:
https://www.cbc.ca/podcasting/pastpodcasts.html?45#ref45
The whole How To Think About Science series that Ideas has been broadcasting is well worth a listen.
In other words, correlation is strongly correlated with causation.
Too little belief is always preferable to too much. If someone else comes along and does it again, there’s your second chance. If you started acting on every Tom, Dick, and Harry’s theory, you’d be taking five kinds of Tibetan berry juice by now.
Failing to learn something you could have learned is not a mode of failure for modern scientists. There will always be more than you could ever learn. You’re never alone. On the other hand, building a bridge on a foundation that will eventually have to be removed is pretty much a complete waste.
Your car example doesn’t make sense, but professional scientists react by considering results false first because important new results are rare. Any graduate student can tell you this: to them every new feature is a mystery, while the postdoc remembers that you get crazy results with only one crossed wire.
“The possibility of believing something too little does not occur to most professional scientists, at least if you judge them by their public statements, which are full of cautions against too much belief and literally never against too little belief.”
I couldn’t disagree more. Look at the theory of anthropogenic global warming. The second anyone professes too little belief, they are ostracized and branded a heretic — whoops — I meant to write skeptic. Which, ironically, has become a pejorative in the scientific community.
Seth,
It’s not the same, but it reminded me of some of the methodological issues she has written about over the years. Check out her paper “The Secret Sins of Economics.” You can find it here: https://www.prickly-paradigm.com/paradigm4.pdf
Read the discussion of statistical significance and the mammogram example starting at the bottom of page 48 and following.
In his book The First Three Minutes, Steven Weinberg explains his earlier rejection of the Big Bang Theory: “Our mistake is not that we take our theories too seriously, but that we do not take them seriously enough. It is always hard to realize that these numbers and equations we play with at our desks have something to do with the real world. Even worse, there often seems to be a general agreement that certain phenomena are just not fit subjects for respectable theoretical and experimental effort.”
So it is not quite true that scientists do not think about what happens when we do not trust scientific results enough. Another recent example is Max Tegmark complaining that the many-worlds interpretation (MWI) of quantum mechanics is not taken seriously enough, or Robin Hanson complaining that what most studies show (that there is no clear relationship between more medicine and more health) is not taken seriously enough.
Rather than “too little belief” in general, I should have said “too little belief in new facts.” I should have said that no scientist warns against under-inference — not inferring enough from new facts. Although since I wrote that I thought of an exception: In 2007, Bruce Ames et al. wrote a letter complaining that the committee behind a report of nutrition recommendations failed to take seriously enough the evidence before them. The committee wanted better evidence before doing anything.
In the case of theories, as Varangy says, it is different.
Matthew, thanks for the reference.
Pedro, yes, in individual cases scientists complain that this or that evidence isn’t taken seriously enough.
“Too little belief is always preferable to too much.” It took doctors a long time to realize that smoking causes lung cancer. Their objections to the evidence in front of them were often absurd. That is a case in which too little belief was harmful and too much belief would not have been harmful.
By the way, excellent recommendation, Matthew. The joke about double positive sentences made my afternoon.
“Too little belief is always preferable to too much.” What if the new idea is correct?
Well, I’ve been going around for a while now saying that Bayesian probability theory tells us that there is an exactly correct update which you should make upon new evidence, neither more nor less; and even in cases where we can’t calculate the math exactly, the mere fact that math exists tells us that there is a correct update which has no room in it for our whims, or for “conservatism” if you feel like being conservative.
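As a minimal sketch of what that looks like (the prior and likelihood ratio below are invented purely for illustration): given a prior and the likelihood ratio of the evidence, Bayes’ rule pins down exactly one posterior, with no slack left for shading it up or down.

```python
# Invented numbers, for illustration only: Bayes' rule in odds form.
# posterior odds = prior odds * likelihood ratio: one answer, no room
# for extra "conservatism" or extra enthusiasm.

def bayes_update(prior, likelihood_ratio):
    """Posterior probability from a prior probability and the likelihood
    ratio P(evidence | H) / P(evidence | not-H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.1   # assumed prior probability of the hypothesis
lr = 4.0      # evidence is four times likelier if the hypothesis is true

print(bayes_update(prior, lr))  # ~0.31: the one update the math licenses
# Believing less than this is under-inference; believing more is over-inference.
```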
Judea Pearl has written extensively on the correlation/causation business. You can actually extract some damned impressive evidence off of even noninterventionist experiments, though it takes a sophisticated theory of causation to do it.
The heuristics and biases community has investigated “motivated skepticism”.
So, no, you are not quite a lone voice in the wilderness here – though I agree that it is one of the most important ways that old-style pre-Bayesian Traditional Rationality goes astray. But the Bayesians have noticed the mistake, analyzed it mathematically, investigated it experimentally, etc.
IMO this conservatism is a natural and reasonable correction for the undeniable fact that the majority of new and striking “results” are simply wrong. I’ve noticed a significant number of these in my own field (climate research) in the last couple of years (examples available if you want), they got a lot of publicity but basically every knowledgeable scientist realised (correctly) they were probably wrong at the outset. In fact it seems to me that the professional scientists are being (approximately) Bayesian in requiring strong evidence to overcome (well-justified) prior beliefs that the “new” results seek to overturn.
James, I am surprised to hear that “the majority of new and striking ‘results’ are simply wrong.” I’m not sure what you mean by “results”. Theories, methods, data, conclusions drawn from data?
I was referring to data — that is, observations. Upon encountering new data, the reaction of the average scientist is much more about what you can’t learn from it (e.g., “correlation does not imply causation”) than what you can.
You believe that most new data is “simply wrong”? Wrong in what sense? And why do you believe this?
“Wrong” in that it does not represent the theory that is attributed to it.
In climate change, we have:
Plants emitting methane (AIUI no-one knows yet where this result really came from, but no-one thought it was reasonable and several replications have contradicted it).
Oceans cooled over the last few years (now clearly understood as an artefact of measurement error due to a large number of buoys sharing a bias).
Ocean circulation slow-down (combination of a rather simplistic analysis and perhaps intrinsic high-frequency natural variability being larger than we thought).
These all had a *lot* of press coverage, and it was IMO entirely correct of scientists to warn against believing them too strongly.
I don’t think most new data is wrong – much data is confirmatory in nature, uncontroversial and right. I think much or most new and striking data are wrong. It’s basically publication bias. But these are the cases where you hear scientists commenting, precisely because they are highly talked-about.
Thanks for the explanation, James. As I say in a later post,
https://sethroberts.org/2008/02/10/how-to-be-wrong-continued/
I see the same bias in areas very different from science. Therefore I don’t think it is caused by science-specific things such as publication bias or press coverage.
My grandfather used to tell me “If the bird book and the bird disagree, believe the bird.”
This is a lesson I have to teach new baby engineers pretty well every year.
Eliezer,
“…though I agree that it is one of the most important ways that old-style pre-Bayesian Traditional Rationality goes astray. But the Bayesians have noticed the mistake, analyzed it mathematically, investigated it experimentally, etc.”
This has nothing to do with Bayesianism.
Spirtes, Glymour and Scheines are not Bayesians. I have no idea about Pearl.
Gustavo
I like this term “twisted skepticism”. It’s more palatable than “dishonest skepticism”.
Justifications for the habit of twisted skepticism, and for specific examples of it, always sound plausible, but are often revealed as rationalization when the same individuals fail to be similarly skeptical of ill-supported notions favored within their community. E.g., no astronomer can remain in good standing while expressing any skepticism that 98% of the universe’s mass/energy is composed of stuff of which no hint has ever been detected in a laboratory. Likewise, none may be skeptical of the faith that gravitation must be the entire explanation for any large-scale phenomenon, or that the Doppler effect must explain all observed red shift, without exception.
Different fields of science have different levels of dogmatism; astronomy’s may be higher than most, paleontology perhaps lower.
I have identified two systematically irrational behaviors common to scientists. First, there is commonly an established theory which is inconsistent with new data. (Perhaps no diagnostic data ever supported it; it may have originated as an honest speculation by a respected elder.) An alternative theory is simpler, accounts equally well for old data, but also predicts the new data. A rational scientist would accept that there are now two theories on possibly equal footing, but this never happens. Instead, the new theory must pass overwhelmingly more stringent tests than the old theory ever did before it may even be considered as a reasonable alternative. Until this occurs, the contradictory data is ignored or discounted.
A related systematically irrational behavior occurs when new data conclusively falsifies a commonly-held theory (or received speculation), but no one has advanced a palatable alternative. The typical response is to ignore, discount, or even actively suppress the new data.
Systematically irrational behavior by scientists has seemed odd enough that I have puzzled over it for years. The best explanation I have identified is that scientists are self-selected from among the population as those who feel a need to know, and to feel that they do know. To go from relying on one theory to considering two feels like going from knowing to only half-knowing. To discard a theory one has lived with feels like going from knowing something to knowing nothing. Both are, evidently, intolerable to most people who choose to become scientists.
The above does not suffice to explain the condition of astronomy.