In a science classroom at a middle school I saw a poster about “the scientific method.” There were seven steps; one was “analyze your data.” According to the poster, you use the data you’ve collected to say if your hypothesis was right or wrong. Nothing was said about using data to generate new hypotheses. Yet coming up with ideas worth testing is just as important as testing them.
It’s like teaching the alphabet and omitting half of the letters. Or teaching French and omitting half the common words. No one literally teaches only half the alphabet or only half of common French words, of course, but this is how science is actually taught. Not just in middle school, everywhere. The poster correctly reflects the usual understanding. I have seen dozens of books about scientific method. They usually say almost nothing about how to come up with a new idea worth testing. An example is Statistics for Experimenters, a well-respected book by Box, Hunter, and Hunter. One of the authors (George Box) is a famous statistician.
The curious part of this omission is how unnecessary it is. Every scientific idea we now take for granted started somewhere. It would be no great effort to find out where a bunch of them came from.
I think the problem is that you can’t teach how to come up with new ideas. You can tell a pretty standardized story about how to test ideas, but not about how to develop them. I remember reading in a book about the methods of empirical social research something along the lines of:
“Hypotheses can come from reading the previous literature, the researcher’s intuition… pretty much anything goes.”
Which sounds about right to me.
For me, one of the most valuable ideas I’ve taken away from George Box and his writing is his stress that experimentation is a dialog between theory and practice. One generates theories and then tests them using sound methods. The results of the tests generate new perspectives, which are fed back into the original theory, and the cycle repeats. Another idea stressed quite a bit in the DOE literature is not to put all one’s resources into the first experiment: the first experiment is the one most likely to reveal that one’s thinking about the problem is way off track. The idea is there, in the background, that the experiments themselves generate the ideas and theories that step by step bring one closer to understanding.
Allen Neuringer and other experimental psychologists would disagree with the statement that “you can’t teach how to come up with new ideas.” Allen’s research paradigm directly addresses this question and finds that variation, the foundation of creativity, can be selected for through operant reinforcement. I remember reading similar studies in which human children were reinforced for creative (i.e., novel) output in art media (painting, if my memory is correct).
That said, idea generation is very neglected in the science classroom and literature, but I think it is alive and kicking in the mentor-apprentice model found in most Ph.D. labs. Also, see Platt’s famous 1964 Science paper “Strong Inference” for a well-stated explanation of the scientific method.
“You can’t teach how to come up with new ideas.” I think you can. One way to come up with new ideas — that is, increase the chances of this happening — is to do a better job of analyzing your data. To examine it more thoroughly. This will sometimes reveal hard-to-explain anomalies. These anomalies will often inspire new ideas. Note that this particular method was not one of the ones listed in that methodological book LemmusLemmus read.
Seth,
the “you can’t teach…” wording was much too strong, even wrong. It would be more accurate to say that it is harder. My main point was that there is a fairly standardized method for testing ideas – formulate the hypothesis, collect data, etc. – and no such standardized method for coming up with them. I’m definitely not against telling students to come up with ideas by looking closely at the data, introspection, self-experimentation, and so forth. In fact, it would be useful to have an as-exhaustive-as-possible list in a textbook. The list is going to vary depending on the subject.
(It’s been a long time since I read that book; that was not a verbatim quote.)
As an aside, I once read an article in a German sociology journal that included a long and winding paragraph in which the authors justified not deriving their hypothesis from the previous literature, but simply from everyday observations. I have a funny feeling that one was included in response to a reviewer’s comment.
“The list is going to vary depending on the subject.” Interesting idea — why do you think this?
Seth,
for example, if you’re an astrophysicist, self-experimentation will not help much.
Astrophysicists have much bigger problems these days than not being able to apply self-experimentation. Most of what they theorize about (black holes, neutron stars, dark matter, dark energy) very likely doesn’t exist at all. Everything they see (planets excepted) and everything in between is made out of stuff whose behavior is governed by completely intractable mathematics they prefer not to think about at all.
The scientific method was invented in its entirety by one Ibn al-Haytham, a bit more than a thousand years ago. (He is known in mathematics as Alhazen or Alhacen, depending on context.) He was under house arrest in Egypt and spent his time in confinement founding the study of optics. He invented science in order to obviate the need to appeal to authority. His book on optics was the standard reference for centuries afterward, and it was well known to Francis Bacon.
Hypotheses fall from the sky like rain. Science provides a way to cull the wrong ones. It fails completely if you don’t want to cull the wrong ones.
Thanks for the explanation, LemmusLemmus. I think self-experimentation isn’t a basic principle of science; it is just one way to follow a basic principle, which I would state as: “to find new ideas worth testing, gather data quickly and cheaply.”
Nathan, what does “hypotheses fall from the sky like rain” mean? It sounds too passive. It is hard to increase the rate of rainfall; it is much easier, I’m sure, for a scientist to increase the rate at which they generate new ideas worth testing.
My point was that any competent scientist can come up with more hypotheses than he can afford to test carefully, and way more than his or her colleagues can afford to pay attention to, in aggregate. With science, we can offer reasons why somebody else should pay attention to our ravings. This was the problem that al-Haytham first addressed. Ravings were perhaps more prevalent then.
“Any competent scientist can come up with more hypotheses than he can afford to test carefully.” More plausible hypotheses? That is the opposite of my experience. In my field, experimental psychology, it is extraordinarily hard to come up with new treatments that will plausibly have a big effect on the main things we study. What are you basing this statement on?
I’m not an experimental psychologist. Doesn’t it take a long time to test a hypothesis thoroughly? Aren’t there many, many phenomena still lacking definitive explanation?
For many years the only plausible hypotheses were behaviorist, and inventing plausible behaviorist hypotheses is really hard. More generally, the hardest part of inventing hypotheses is to stop pretending to know what you don’t really know. Limiting yourself to what grant committee members and journal referees would welcome makes hypothesizing artificially difficult.
One of the other authors of Statistics for Experimenters was my father, William Hunter. It is great to keep hearing from people who like the book.