Last night, at a Vietnamese restaurant, I had an avocado shake for dessert. On the way home I stopped at a Chinese bakery and got garlic pork cookies. Had science, like cooking, been invented more than once, what would other scientific traditions — other ways of doing science — look like? My guess is they would not include:
1. Treating results with p = 0.04 quite differently from results with p = 0.06. Use of an arbitrary dividing line (p = 0.05) makes little sense.
2. Departments of Statistics. Departments of Scientific Tools, yes; but to put all one’s resources into figuring out how to analyze data and none into figuring out how to collect it is unwise. The misallocation is even worse because most of the effort in a statistics department goes into figuring out how to test ideas; little goes into figuring out how to generate ideas. In other words, almost all the resources go toward solving one-quarter of the problem.
3. Passive acceptance of a negative bias. The average scientist thinks it is better to be negative (“skeptical”) than positive when reacting to other people’s work. What is the positive equivalent of skeptical — a word that means appreciative in a “good” way? (Just as skeptical means disbelieving in a “good” way.) There isn’t one. However, there’s gullible, further showing the bias. Is there a word that means too skeptical, just as gullible means too accepting? Nope. The overall negative bias is (male) human nature, I believe; it’s the absence of attempts to overcome the bias that is cultural. I used to subscribe to the newsletter of the Center for Science in the Public Interest (CSPI). I stopped after I read an article about selenium that had been prompted by a new study showing that selenium supplements reduced the rate of some cancer (skin cancer?). In the newsletter article, someone at CSPI pointed out some flaws in the study. Other data supported the idea that selenium reduces cancer (and showed that the supposed flaws didn’t matter), but that was never mentioned; the new study was discussed as if it were the only one. Apparently the CSPI expert didn’t know about the other data and couldn’t be bothered to find out. And the CSPI writer saw nothing wrong with that. Yet that’s the essence of figuring out what’s good about a study: figuring out what it adds to previous work.
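The arbitrariness of the p = 0.05 line (point 1) can be made concrete. As a rough sketch of my own (not from the post): under a two-sided normal test, p = 0.04 and p = 0.06 come from nearly identical test statistics, yet convention sorts them into opposite bins.

```python
# Sketch of how close p = 0.04 and p = 0.06 really are (illustration
# only). Under a two-sided z-test, convert each p-value back to the
# z statistic that produced it.
from statistics import NormalDist

def z_from_two_sided_p(p):
    """z statistic whose two-sided normal p-value equals p."""
    return NormalDist().inv_cdf(1 - p / 2)

z_significant = z_from_two_sided_p(0.04)   # ~2.05, "significant"
z_not = z_from_two_sided_p(0.06)           # ~1.88, "not significant"

# The underlying test statistics differ by under 10 percent, yet the
# p = 0.05 convention treats one result as a finding and the other
# as a null result.
print(round(z_significant, 2), round(z_not, 2))
```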
See my earlier post about another bit of scientific culture: the claim that “correlation does not equal causation.”
I’m just randomly coming up with ideas, but perhaps the skeptical/gullible divide exists because “positive acceptance” is the default human response to a proposition, and because rejecting a proposition is viewed negatively by default (consider the typical reaction to having an urban legend debunked — more often than not, it’s considered rude).
That is, “positive rejection” (skepticism) and “negative acceptance” (gullibility) are attitudes explicitly opposite to normal human behavior, which is why they have names and are valued so highly. If that makes any sense.
‘Science’ has been invented more than once, but we in the Western Hemisphere accept only our own kind of science.
Other sciences, like the Chinese knowledge of energy, are simply disregarded as not being science at all, because they do not fit our ‘scientific’ thinking.
Therac, “skepticism” and “gullibility” are themselves opposites – they can’t both be “attitudes that are explicitly opposite to normal human behavior.” I assume you mean the value system inherent in the words is what’s opposite.
I think they have names because people often praise what they call skepticism and criticize what they call gullibility. Their opposite-valenced counterparts — over-skepticism and under-gullibility — don’t have names because the underlying values (it’s bad to be skeptical, good to be gullible) aren’t expressed very often. You might have something there: that X is not praised and Y not dispraised because they already exist. I don’t know.
I’m, ummm, skeptical about all three of these. (And I’m wondering how strong the evidence is that cooking was actually invented more than once. Sure, it could have been, but how can you be sure?)
(1) Dividing lines between those-that-make-it and those-that-don’t are usually somewhat arbitrary in the sense that they can be pushed around without changing the framework, yes; that’s true whether you’re talking about picking the ideas you’re going to go on working with (“the difference between significance and insignificance is not significant”) or actual evolutionary survival-via-fitness-functions (e.g., when I write a genetic algorithm). Still, we have to choose, unless we choose to keep everything, which is not usually cost-effective. I admit that I’m not sure that the issue is always cost-effectiveness; sometimes, as with the placement of an international date line, you have an actual logical necessity. (I think.) But just on cost-effectiveness grounds, I think any culture trying out ideas is going to come up with ways of measuring that look a lot like p-values, and then is gonna have to choose arbitrary conventions for picking the ones to go on with.
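The genetic-algorithm analogy in this comment can be sketched in a few lines (a toy illustration of my own, not the commenter’s actual code): truncation selection keeps the top fraction of a population by fitness, and that fraction, like p = 0.05, is an arbitrary but necessary convention.

```python
# Toy truncation selection (illustration only): keep the fittest
# fraction of a population. The cutoff fraction is a convention --
# nudging it changes who survives, not how selection works.
import random

random.seed(0)
population = [random.random() for _ in range(10)]  # fitness scores

def survivors(pop, keep_fraction):
    """Return the top keep_fraction of pop, ranked by fitness."""
    keep = int(len(pop) * keep_fraction)
    return sorted(pop, reverse=True)[:keep]

# A cutoff of 0.5 keeps 5 of 10; 0.6 keeps 6. Both are defensible,
# neither is logically forced -- just like the p = 0.05 line.
print(len(survivors(population, 0.5)), len(survivors(population, 0.6)))
```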
(2) Departments of Statistics do less than a quarter of the work, with less than a quarter of the resources; that’s good. Keep ’em.
(3) Almost all ideas are bad, especially mine. Skepticism is appropriate. (Yes, meta-skepticism applied to claims about the virtue of skepticism is also appropriate, and so recursively to find the least fixpoint thereof, which is “I dunno.” And indeed, I dunno. But I think I’ll post this just the same, in appreciation of ideas of which I am skeptical.)
“Departments of Scientific Tools, yes; but to put all one’s resources into figuring out how to analyze data and none into figuring out how to collect it is unwise.”
I’m not sure I understand what you mean. As you know, statisticians are very interested in how one should design experiments, and in how to iterate between data, theory, and experiment. The classic textbook by Box, Hunter, and Hunter, “Statistics for Experimenters,” is concerned entirely with this topic.
Well, “Statistics for Experimenters” might be the best statistics textbook ever written. It isn’t typical. Look at an average statistics text. Statistics professors have done much more work on how to analyze data than on how to collect it.