And how do you learn what matters?
When I was a grad student, I read Stanislav Ulam’s memoir Adventures of a Mathematician. I was impressed by something Ulam said about John von Neumann: that he grasped the difference between the trunk of the tree of mathematics and the branches. Between core issues and lesser ones. Between what matters more and what matters less. I wanted to make similar distinctions within psychology. Nobody talked about this, however, and no book discussed it either.
Some research will be influential, will be built upon. Some won’t. To put it bluntly, some research will matter, some won’t. I once thought of teaching a graduate course where students learn to predict how many citations an article will receive. You take a 10-year-old journal issue, for example, and try to predict how many citations each article will receive. I like to think it would have been a helpful class: The key to a successful scientific career is writing articles that are often cited. I even had a title: “What Will You Do After You Stop Imitating Your Advisor?”
When I was a grad student the short answer to “what matters?” in experimental psychology was clear enough:
1. New methods. The Skinner box, for example, was a new way to study instrumental learning. Skinner didn’t discover instrumental learning, nor create the first laboratory demonstration of it; he simply made it easier to study.
2. New effects. New cause-and-effect linkages. For example, John Garcia discovered that if you make a rat sick after it experiences a new flavor it will avoid foods with that flavor.
My doctoral dissertation was about a new way to study animal timing.
A few months ago I had coffee with Glen Weyl, a graduate student in economics at Princeton. We discussed his doctoral research, which is about how to test theories. One of Glen’s advisors had told him about a paper by Hal Pashler and me on the subject. Hal and I argued that fitting a model to data is a poor way to test the model because there is no allowance for the model’s flexibility. The first reviewers of our paper didn’t like it. “You don’t realize how hard it is to find a model that fits,” one of them wrote.
Glen’s interest in this question began during a seminar in Italy, when he realized the speaker was more or less ignoring the problem: the speaker was comparing how well two different theories could explain the same data without taking into account their different amounts of flexibility. Glen’s thesis proposes a Bayesian framework that makes such comparisons while accounting for flexibility. His main example uses data from the choice experiments of Charness and Rabin. (Matt Rabin is a MacArthur Fellow.) Taking flexibility into account, Glen reaches a different conclusion than they did.
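The core problem is easy to see in miniature. This is not Glen’s framework (his is a Bayesian treatment of economic choice data); it is a toy sketch in Python showing why raw goodness-of-fit misleads: a flexible model always fits the sample at least as well as a simpler one nested inside it, even when the simple model is true. A stand-in penalty for flexibility (here BIC) can reverse the verdict.

```python
# Toy illustration, not Glen's actual method: a 5th-degree polynomial
# vs. a straight line, when the true process IS a straight line.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
y = 2 * x + 1 + rng.normal(0, 1, n)   # true process: straight line plus noise

def rss(degree):
    """Residual sum of squares after a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return np.sum((y - np.polyval(coeffs, x)) ** 2)

def bic(degree):
    """Bayesian information criterion: in-sample fit plus a penalty
    that grows with the number of free parameters (the flexibility)."""
    k = degree + 1                      # free parameters in the polynomial
    return n * np.log(rss(degree) / n) + k * np.log(n)

# Judged by fit alone, the flexible model always wins (or ties),
# because the straight lines are a subset of the 5th-degree polynomials...
assert rss(5) <= rss(1)

# ...but once flexibility is penalized, the simple, true model wins.
assert bic(1) < bic(5)
```

Any method that rewards fit without charging for flexibility will systematically prefer the more elaborate theory, which is exactly the reviewer's "you don't realize how hard it is to find a model that fits" objection turned on its head.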
I wondered how Glen decided this was important. (It’s a method, yes, but a highly abstract one.) I asked him. He replied:
Sadly, despite my interest in the history of economic thought, I don’t have a lot of insight about why I came upon these thoughts. But one thing: my interests are very interdisciplinary . . . My work is based on drawing connections between economics, philosophy of science, and computer science (and meta-analysis from psychology and bio-statistics). Most of my work takes this form: as you’ll see on my website, I’ve used theoretical insights from economics and computer science as well as evidence from neuroscience, psychology and biology to critique the individualist foundations of liberal rights theory; I’ve used ideas from decision theory to lay firmer foundations for goals set out by computer scientists designing algorithms; I’ve used tools from information theory to instantiate insights from psychology to help understand the design of auctions; and I’ve used computational neuroscience to model biases in economic information processing. Broad interests are hard to have, because they limit the time for learning a particular area in depth, but I prefer to read broadly and draw connections rather than to read deeply and chip away at open questions.
That was interesting. I read broadly, and so does Hal, who knows more about the philosophy of science than I do. I wrote to Glen:
The usual comment about interdisciplinary knowledge is that it’s good because you can bring ideas from one area, including solutions and methods, to solve problems in another area. . . . But maybe it’s also good because by learning about different areas you absorb a range of different value systems and this makes you less sensitive to fads (which vary from field to field), more sensitive to longer-lasting and more broadly-held values.
The more trees you know, the easier it is to see the forest.