Why Psychologists Don’t Imitate Economists

Justin Wolfers, an economist, via Marginal Revolution:

When I watch and speak with my friends in psychology, very little of their work is about analyzing observational data. It’s about experiments, real experiments, with very interesting interventions. So they have a different method of trying to isolate causation. I am certain that we have an enormous amount to learn from them. But I am curious why we have not been able to convince them of the importance of careful analysis of observational data.

By “careful analysis of observational data” I think Wolfers means the way economists search within observational data for comparisons in which the factor of interest is the only thing that changes (which is why he says “isolate” rather than “infer”). He’s right — it really is a methodological innovation that psychologists are unfamiliar with. It lies between ordinary survey data and experiments.
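To make that concrete, here is a rough sketch in Python (with entirely made-up numbers, not a real analysis) of one such comparison, a difference-in-differences. Instead of comparing the treated and untreated groups directly, you compare how much each group changed, which nets out stable differences between the groups and any trend they share.

    # Difference-in-differences on hypothetical observational data: one group is
    # exposed to some change, the other is not; both are measured before and after.
    # All numbers below are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical outcome (say, hours worked per week). The shared time trend
    # is +0.5 and the true effect of the change is +2.0.
    treated_before = rng.normal(35.0, 5, n)
    treated_after  = rng.normal(37.5, 5, n)   # 35 + 0.5 (trend) + 2.0 (effect)
    control_before = rng.normal(30.0, 5, n)   # control group differs in level
    control_after  = rng.normal(30.5, 5, n)   # 30 + 0.5 (trend only)

    naive = treated_after.mean() - control_after.mean()        # confounded by the level difference
    did = ((treated_after.mean() - treated_before.mean())
           - (control_after.mean() - control_before.mean()))   # nets out levels and the shared trend

    print(f"naive after-only comparison: {naive:.2f}")
    print(f"difference-in-differences:   {did:.2f}  (true effect built in: 2.0)")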

Here’s why I think this innovation has had (and will have) little effect on psychology:

1. Most psychology professors are bad at math. They still use SPSS! It's terrible, but they think R is too difficult. Economics papers are full of math. That is part of the problem. Math difficulty also means they have trouble with basic statistical ideas. When analyzing data, they're afraid they'll do the wrong thing. For example, most psychology professors don't transform their data. It wasn't in some crummy textbook, so they are afraid of it. (A small example of the kind of transform I mean appears after this list.) Lack of confidence about math makes them resistant to new methods of analysis. Experimental data is much easier to analyze than observational data. You don't need to be good at math to do a good job. So they not only cling to SPSS, they cling to experimental data.

2. Psychology studies smaller entities than economics. Study of the parts often influences study of the whole; the influence rarely goes the other way. This is why, when it comes to theory, physics will always have a much bigger effect on chemistry than vice-versa, chemistry a much bigger effect on biology than vice-versa. Method is different than theory but if you aren’t reading the papers — and physicists don’t read a lot of chemistry — you won’t pick up the methods.

3. There is a long history of longitudinal research in psychology: studying one or more groups of children year after year into adulthood. Terman's Genetic Studies of Genius is the most famous example. I find these studies unimpressive. They haven't found anything I would teach in an introductory psychology class. I think most psychologists would agree. This makes observational data less attractive by association.

4. Like everyone else, psychologists have been brainwashed with “correlation does not equal causation”. I have heard many psychology professors repeat this; I have never heard one say how misleading it is. To the extent they believe it, it pushes them away from observational data.

5. Psychologists rarely use observational data at all. To get them to appreciate sophisticated analysis of observational data is like getting someone who has never drunk any wine to appreciate the difference between a $20 wine and a $40 wine.
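Here is the sort of transformation I mean in point 1, sketched in Python with made-up reaction times (nothing from a real study). A log transform turns badly skewed data into something much better behaved, and it takes one line.

    # Reaction times (invented here) are typically right-skewed; taking logs
    # makes them far better behaved for the standard analyses people run.
    import numpy as np

    rng = np.random.default_rng(1)
    rt_ms = rng.lognormal(mean=6.0, sigma=0.5, size=1000)   # hypothetical reaction times in ms

    log_rt = np.log(rt_ms)

    def skew(x):
        """Crude skewness: the average cubed z-score."""
        z = (x - x.mean()) / x.std()
        return np.mean(z ** 3)

    print(f"skew of raw reaction times: {skew(rt_ms):.2f}")
    print(f"skew after log transform:   {skew(log_rt):.2f}")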

18 thoughts on “Why Psychologists Don’t Imitate Economists”

  1. Working from nearly zero information, I wonder if it’s not harder to get good observational data that’s of interest to psychologists. I gather economists do a lot of work with big, publicly available datasets collected by agencies whose job it is to collect them. If I knew about a large dataset that included variables of psychological interest, I’d be interested.

    There are exceptions — my ex-officemate Gary Lupyan has a recent PLoS ONE paper analyzing how morphology relates to number of speakers in a big database of languages. And it’s obviously possible that many of these datasets exist, and I’m unaware of them because of this very methodological bias.

    (Unrelated, I have a bachelor’s in computer science and don’t touch SPSS if I can avoid it — but R is gross in its own way, and I’m getting more and more frustrated with how much effort it takes to do simple things. I’m starting to think that Python with numpy and matplotlib may be the open source data analysis and visualization solution, not that I’ve tested this hypothesis.)

  2. I know quite a few psychologists who absolutely love SAS. I work with SAS exclusively, both for my psychological and economic analyses.

  3. Alex, I read the article but didn’t find any impressive results in it. Maybe you noticed that no small set of results was emphasized; instead, 20-odd conclusions were described briefly.

    Matt, the big databases you refer to often measure things that depend on behavior, which psychologists study. For example, income reflects behavior. Sometimes these big databases include something very close to behavior, such as highest degree achieved. But a psychologist would think that income and highest degree achieved are poor measures of behavior because they are likely to depend on dozens of things, including many a psychologist wouldn’t care about (e.g., where you live). When you can’t measure all the things that might affect your dependent variable, and it’s observational (non-experimental) data, you get very nervous about concluding cause and effect.

  4. Sam, yeah, you’re right. And some of the personality work does impress me — Frank Sulloway’s, for example. That would be one way that these economic methods could begin to influence psychology.

  5. This fixation economists have on comparing their discipline with others invariably proves the superiority of economics over all things. “We’re the awesome realists,” they say. And yet the gulf ignored is between analysis of what is past and useful observations for the future. And the disciplines ignored: ethics and philosophy.

  6. Practicing psychologists are people who work with one (or sometimes a few, as in group therapy) person at a time. This is so much taken for granted that the idea that one ought to look at large numbers of people seems foreign to them. I suppose this outlook tends to infect academics as well, especially as many of them do therapy too.

    I do some work with abused children. I constantly run across therapists who are willing to offer firm opinions about someone’s character or intentions based solely on discussions with that one person. Whenever I point out that several other people tell very different stories from what they’ve heard, they mostly just look at me funny. It’s sort of the same thing as if one were to conduct a trial by asking questions only of the defendant. This mindset doesn’t encourage the sort of thinking that economists do. It also doesn’t tend to lead to accurate predictions: studies of how accurate therapists are at predicting things like dangerousness show them to be about on a par with coin flips.

  7. What’s going on here is that by and large economists and psychologists have different objectives.
    When psychology began as a science in the 19th century, it defined itself as the science that studied — experimentally — conscious experience. It asked questions such as “How many simple ideas can be held in consciousness at the same time?” (answer: 3-4, clearly and distinctly). Observational data sets aren’t useful here.
    Even though psychology is now the science of behavior, we’re still interested in elucidating internal mechanisms that cause behavior, whether conscious, nonconscious, or physiological. As an example, let me take Friedman’s famous billiards player. He was happy to explain the player’s behavior from the outside, as if the player were a master of physics. A psychologist would want to look inside: What thoughts does he have as he lines up a shot? What sensory cues does he need to integrate to make an effective shot: light reflectance off the billiard balls? kinesthetic cues from the position of his body and hand as he holds the pool cue? kinesthetic feedback from the movement of his hand as he strikes the ball? sensory integration of the visually perceived movement of the ball with the feedback from striking the ball? Also: How does one move from being a novice billiards player to an expert one? Is there explicit instruction? Does one learn by observation? If the latter, how does the learner figure out which aspects of an expert’s movements are relevant and which are not? What’s happening at the neural level as the player contemplates and executes a shot? Because these are questions about internal mechanisms, observational data sets are not helpful. One needs careful experimental studies in which one might vary things such as light reflectance off the balls.
    In this connection, let me note that one can find very sophisticated mathematical work in psychology; it will just be in areas that economists are unlikely to encounter, such as psychophysics (Fechner’s law looks a lot like a utility curve, by the way; see the note at the end of this comment) or in work on neural network theory or dynamical systems.
    Finally, I’d like to mention that the very first work I know of in behavioral economics never seems to come up in economists’ discussions of it, which tend to start with Simon and Kahneman/Tversky. It was work back in the 1970s by Skinnerian radical behaviorists on choice behavior in animals, pioneered by Richard Herrnstein (of the Bell Curve). Skinnerians don’t care about internal mechanisms, either.
    If we were all social scientists from Mars studying the movements of “automobiles” on Earth’s highways, economists would collect large observational data sets about traffic flows. Psychologists would want to capture several vehicles and look inside to see how they work.
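    A note for readers outside psychology on the resemblance mentioned above (the exact constants do not matter):

        Fechner’s law:  S = k * log(I / I0)    (sensation S as a function of stimulus intensity I, with detection threshold I0)
        Log utility:    u(w) = log(w)          (Bernoulli’s logarithmic utility of wealth)

    In both, equal ratios of the input produce equal increments of the output.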


  8. Psychology studies smaller entities than economics. Study of the parts often influences study of the whole; the influence rarely goes the other way. This is why, when it comes to theory, physics will always have a much bigger effect on chemistry than vice-versa, chemistry a much bigger effect on biology than vice-versa. Method is different than theory but if you aren’t reading the papers — and physicists don’t read a lot of chemistry — you won’t pick up the methods.

    I share the intuition… has anybody fleshed out the implications?

  9. I wonder if there are studies that have been redone using the more sophisticated data techniques of economists. My question is: Does it make a difference, not for one or two studies, but for the thrust of the basic conclusions in an area?

    This would be an excellent opportunity to examine the various ways smart researchers can overcome the limitations of their respective empirical methods.

    It won’t be done, of course — so economists can continue to feel superior.


  10. Psychologists rarely use observational data at all.

    Social psychologists use observational data — of a particular sort — all the time. They do it when they analyze mediation (or “causal mechanisms”) in the style of Baron and Kenny (1986). Their data on mediators are almost always observational: the mediators are measured but not manipulated. The analyses of mediation in these cases should be thought of as observational studies, with all of the difficulties that observational studies entail.
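    To make the worry concrete, here is a small simulation (invented numbers and effect sizes, not from any real study) in which the three Baron-and-Kenny regressions look like partial mediation even though the measured “mediator” has no causal effect at all; an unmeasured variable drives both it and the outcome.

        # X is randomized, but M is only measured. An unobserved U drives both M and Y;
        # M itself does nothing to Y. The classic three regressions still come out
        # looking like partial mediation.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 10_000

        X = rng.integers(0, 2, n).astype(float)    # randomized treatment
        U = rng.normal(0, 1, n)                    # unobserved common cause of M and Y
        M = 0.5 * X + U + rng.normal(0, 0.5, n)    # measured "mediator"
        Y = 1.0 * X + U + rng.normal(0, 0.5, n)    # note: Y does not depend on M

        def ols(y, *cols):
            """Least-squares slopes (intercept fitted, then dropped)."""
            Z = np.column_stack([np.ones(len(y)), *cols])
            return np.linalg.lstsq(Z, y, rcond=None)[0][1:]

        c, = ols(Y, X)          # step 1: total effect of X on Y
        a, = ols(M, X)          # step 2: effect of X on M
        cp, b = ols(Y, X, M)    # step 3: X and M together

        print(f"total effect c = {c:.2f}, X -> M path a = {a:.2f}")
        print(f"'direct' effect c' = {cp:.2f} (smaller than c), 'mediator' path b = {b:.2f}")
        print("Looks like partial mediation, even though M causes nothing here.")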


  11. I wonder if there are studies that have been redone using the more sophisticated data techniques of economists. My question is: Does it make a difference, not for one or two studies, but for the thrust of the basic conclusions in an area?

    Sometimes it does:

    How Choice Affects and Reflects Preferences: Revisiting the Free-Choice Paradigm
    M. Keith Chen and Jane L. Risen
    JPSP 2010

    Excerpts from the abstract:

    After making a choice between two objects, people evaluate their chosen item higher and their rejected item lower (i.e., they “spread” the alternatives). Since Brehm’s (1956) initial free-choice experiment, psychologists have interpreted the spreading of alternatives as evidence for choice-induced attitude change. It is widely assumed to occur because choosing creates cognitive dissonance [… but] the free-choice paradigm (FCP) will produce spreading, even if people’s attitudes remain unchanged […] We show this, first, by proving a mathematical theorem that identifies a set of conditions under which the FCP will measure spreading, even absent attitude change. We then experimentally demonstrate that these conditions appear to hold […] The results suggest a reassessment of the free-choice paradigm, and perhaps, the conclusions that have been drawn from it.
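    A rough back-of-the-envelope simulation of the statistical argument (invented numbers, not from the paper): if ratings are noisy measures of stable preferences, and people choose between two items they rated about equally, the chosen item is on average re-rated higher and the rejected item lower, which produces positive “spreading” with zero attitude change.

        # True preferences V never change; ratings are V plus noise, taken twice.
        # Choices track true preference, so among pairs rated as near-ties the chosen
        # item tends to be the truly better one, and its re-rating drifts upward.
        import numpy as np

        rng = np.random.default_rng(3)
        n_people = 5_000

        V  = rng.normal(0, 1, (n_people, 2))         # stable preferences for two items
        R1 = V + rng.normal(0, 1, (n_people, 2))     # pre-choice ratings
        R2 = V + rng.normal(0, 1, (n_people, 2))     # post-choice ratings, attitudes unchanged

        close  = np.abs(R1[:, 0] - R1[:, 1]) < 0.25  # pairs rated about equally
        chosen = (V[:, 1] > V[:, 0]).astype(int)     # choice follows true preference

        rows = np.flatnonzero(close)
        ch, rej = chosen[rows], 1 - chosen[rows]
        spread = ((R2[rows, ch] - R1[rows, ch])
                  - (R2[rows, rej] - R1[rows, rej]))

        print(f"pairs rated as near-ties: {len(rows)}")
        print(f"mean spreading of alternatives: {spread.mean():.2f}  (with no attitude change)")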

  12. Seth, out of curiosity, can you list a few important points that have come to be generally accepted as a result of the “innovation” you extol in the beginning, which psychologists have foolishly ignored? I’m in neither field, but it strikes me that there are papers showing “evidence from this” and “evidence from that” supporting various hypotheses, but on how many important questions has a general consensus been reached as a result of these studies? Many? More than from the longitudinal studies employed by psychologists?

  13. I think you have it a bit backwards. Psychologists won’t start using large amounts of observational data because experiments are fundamentally better at answering questions. Imagine trying to do observational chemistry or observational physics! While economics has historically had to rely on imperfect observational data, I think more and more economists will start embracing experimental approaches. I think some of your own points actually are pretty good arguments for this line of thinking.

    “Experimental data is much easier to analyze than observational data.” This is pretty much the point. For any given effect, you will have more power to detect the effect in a well-controlled experiment than in observational data with uncontrolled variables. Sometimes you can correct for those uncontrolled variables, but almost always it is an approximation.

    “Study of the parts often influences study of the whole; the influence rarely goes the other way.” With experiments, it is much easier to get at multiple levels of analysis; with observational data, there is a lower limit on how small you can go (usually related to privacy laws and the inability to record all decisions/environments). Some of the really large-scale stuff will probably have to remain observational for similar technical limits, but experiments on groups of up to a few hundred people are feasible.

    Also, correlation isn’t causation. I agree it can sometimes be used dismissively, but it comes back to a basic point: in order to determine the direction of a relationship unambiguously, you need to externally perturb one of the variables and measure the influence on the other variable. That is an experiment. There are sometimes natural experiments, but they can never be as well controlled as a deliberate experiment. (A small simulation at the end of this comment illustrates the point.)

    I will agree on one point, however. Psychologists are chronically bad at math, and this holds the field back. Economists can add a lot to psychology if they bring mathematical rigor to the field and learn how to design good experiments. I think the biggest roadblock for most economists in designing good experiments is knowing what tools they have at their disposal. Psychological methods have a long history and there are a lot of great paradigms to use in there.

    One field that jumps out to me is the study of working memory. There are a lot of interesting findings there that economists could use to start predicting individual and maybe even group behavior. Other topics to learn about would probably be cognitive control/executive function (which desperately needs some better models and rigor) and attention, which relates to literally everything humans do.
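    Here is the small simulation promised above, with invented numbers: a lurking common cause makes X and Y correlate in purely observational data even though X has no effect on Y, while setting X externally, as in an experiment, reveals the true null effect.

        # Z drives both X and Y in the observational world; X itself does nothing to Y.
        # In the experimental world, X is assigned independently of everything else.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 20_000

        Z = rng.normal(0, 1, n)              # unmeasured common cause
        X_obs = Z + rng.normal(0, 1, n)      # observational X, driven partly by Z
        Y_obs = Z + rng.normal(0, 1, n)      # Y depends on Z only, never on X

        X_exp = rng.normal(0, 1, n)          # X set by the experimenter
        Y_exp = Z + rng.normal(0, 1, n)      # same causal structure for Y

        slope = lambda x, y: np.polyfit(x, y, 1)[0]   # least-squares slope of y on x
        print(f"observational slope of Y on X: {slope(X_obs, Y_obs):.2f}  (spurious)")
        print(f"experimental slope of Y on X:  {slope(X_exp, Y_exp):.2f}  (true effect is 0)")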

  14. Oh, and an interesting set of longitudinal data related to psych? I’d say there’s a lot in child development. Probably most likely to make it into a psych 101 course would be maternal diet and its effect upon the psychological outcome of the child.
