Spycraft, Personal Science, and Overconfidence in What We Know

Edward Jay Epstein’s newest Kindle book is James Jesus Angleton: Was He Right?. Angleton worked at the CIA for most of his career, which spanned the Cold War. He struck some of his colleagues as paranoid: He believed that the CIA could easily harbor Russian spies. Colleagues said, “Oh, no, that couldn’t happen.” After his death, it turned out he was right (e.g., Aldrich Ames). At one point he warned the CIA director, “an intelligence [agency] is most vulnerable to deception when it considers itself invulnerable to deception.”

What interests me is the asymmetry of the mistakes. When it really matters, we overestimate our understanding far more often than we underestimate it. CIA employees’ overestimation of their ability to detect deception is a big example. There are innumerable small examples. When people are asked to guess everyday facts (e.g., the height of the Empire State Building) and provide 95% confidence intervals for their guesses, their intervals are too short, usually much too short (e.g., the correct answer falls outside the intervals 20% of the time, when a calibrated 95% interval should miss only 5% of the time). People arrive at destinations late more often than they arrive early. Projects large and small run long far more often than they run short. For any one example, there are many possible explanations. But the diversity of examples suggests the common thread is real: We are too sure of what we know.
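To make the calibration point concrete: you can score anyone’s intervals simply by counting how often the true value lands inside them. Here is a minimal sketch of that scoring; the three facts are real, but the respondent’s intervals are invented for illustration, not data from any actual study.

```python
# Scoring the calibration of 95% confidence intervals: a calibrated
# guesser's intervals should contain the true value about 95% of the
# time. The intervals below are invented for illustration only.

facts = {
    "Empire State Building height (m)": 381,
    "Length of the Nile (km)": 6650,
    "Year the telephone was patented": 1876,
}

# A hypothetical respondent's 95% intervals (low, high) for each fact.
intervals = {
    "Empire State Building height (m)": (300, 350),   # too short: misses
    "Length of the Nile (km)": (5000, 7000),          # hit
    "Year the telephone was patented": (1890, 1910),  # too short: misses
}

hits = sum(low <= facts[q] <= high for q, (low, high) in intervals.items())
print(f"Hit rate: {hits / len(intervals):.0%} (calibrated would be ~95%)")
```

On these made-up answers the hit rate is 33%, the kind of gap between nominal and actual coverage that the calibration studies report.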

There are several plausible explanations. One is that overconfidence helps groups work together. If people work together toward a single goal, they are more likely to reach that goal, or at least learn what happens, than if they squabble. Another is the same idea at the individual level: Overconfidence in our beliefs helps us act on them, and by acting on them, we learn; doing nothing teaches less. A third is a mismatch idea: We are overconfident because modern life is more complicated than the Stone-Age world to which evolution adjusted our brains. No one asked Stone-Age people, “How tall is the Empire State Building?” A fourth is that we assume what physicists assume: that the distant world follows the same rules as the world close to us. This is a natural assumption, but it’s wrong.

Early in Angleton’s career, he had a very unpleasant shock: He realized he had been fooled by the Russians in a big way for a long time. This led him to try to understand why he’d been fooled. Early in my scientific career, I too was shocked: Rats in Skinner boxes failed to act as expected far more often than I would have thought. I overestimated my understanding of them. And this was in a heavily controlled, heavily studied situation! I generalized from this: If I couldn’t predict the behavior of rats in a Skinner box, I couldn’t predict human behavior in ordinary life. My conclusion was that data is more precious than we think. In other words, data is underpriced. If a stock is underpriced, you buy as much of it as possible; so I tried to collect as much data as possible. Personal science — studying my sleep, my weight, and so on — was a way to gather data at essentially zero cost. And, indeed, the results surprised me far more than I expected. I could act on my knowledge of the overconfidence effect, but I could not remove the effect from my own expectations.
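In the spirit of near-zero-cost data collection, here is a minimal sketch of one way surprise in self-tracked data might be quantified. The sleep numbers and the two-standard-deviation rule are my illustrative assumptions, not a method described in the post.

```python
# Minimal personal-science sketch: flag days where a tracked value
# (e.g., hours slept) falls outside the range past data would predict.
# The numbers and the 2-standard-deviation threshold are illustrative
# assumptions, not anything from the post itself.
import statistics

sleep_hours = [7.5, 7.2, 8.0, 7.8, 7.4, 7.6, 7.9, 5.1, 7.7, 9.6]

mean = statistics.mean(sleep_hours)
sd = statistics.stdev(sleep_hours)

for day, hours in enumerate(sleep_hours, start=1):
    if abs(hours - mean) > 2 * sd:
        print(f"Day {day}: {hours} h is surprising (mean {mean:.1f}, sd {sd:.1f})")
```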

14 thoughts on “Spycraft, Personal Science, and Overconfidence in What We Know”

  1. Your essay contains one of the most beautiful statements I’ve heard in science: “Data is more precious than we think … data is underpriced.” As a scientist, I find that some colleagues love to spend time spinning hypotheses and arguing with each other, when it would take less time to go into the lab and run the experiment. So often the data reveal surprises and confound theories. There is just no substitute for quality data.

  2. I agree. Great, provocative post, Seth!

    Don’t forget the influence of professional roles and the situations they cast people in, and how this generates overconfidence and an unwillingness to honestly rate how much confidence a prediction deserves, given the knowledge available. For example, psychiatry for most of its history has had virtually nothing to offer people suffering from mental illness. (It has only a little now.) When we lack knowledge, the void gets filled with theory — vast, rather obscure psychoanalytic theory based on intuitions, much or most of which, in the light of subsequent evidence, has turned out to be nonsense. But psychiatrists are faced with desperate patients and their families; they want an upper-middle-class income, like their other medical colleagues, and they need to appear authoritative. They can’t just say that they have no idea why someone suffers from schizophrenia or manic depression and that they really have no way to treat it. Who would pay them for that?

    Or look at economists, who cannot run experiments on history to get causal knowledge and have only recently begun to rely on psychology experiments to see how economic cognition, affect, and behavior work. I like Paul Krugman, but he did not know where the economy was headed in 1994, or in 2001, or in 2008; he was fairly clueless. Yet he is very confident that a massive stimulus program will have certain positive effects on the economy. And obviously politics distorts cognition in economics.

    Or nutrition, and the overclaiming of authority that takes place there.

    Professionals have a lot of trouble just saying that they don’t know much. I don’t know how to sort out the effects of economic self-interest, situational responses to the strong needs of those they serve, cognitive biases that are just part of the larger culture, and many other things — but it’s a mess.

  3. Thanks, Todd. I am puzzled that this idea isn’t widespread among laboratory scientists, who — judging by your comment — see, just as I do, that even in highly controlled, highly studied situations it’s still hard to be sure what will happen.

  4. Nancy, experimental psychologists have 100 years of data showing that the results from careful studies of individuals (if those individuals weren’t specially chosen) extrapolate to many people. I don’t extrapolate to “everyone”, just to many people. And I do it partly because of all that experimental-psych data.

  5. It has sounded to me as though you say things like “everyone should eat more bacteria” rather than “eating more bacteria may be a good idea for you; here’s how to test it.”

  6. Nancy, my ideas about the value of fermented food are based on a wide range of data, very little of it from me. That is quite different from my other ideas, most of which involve a lot of data from me (e.g., butter). I am sure that we need fermented food to be healthy just as much as we need Vitamin C; I see no other plausible explanation for the wide range of data. I extrapolate to “everyone” partly because there is no group of people who don’t need Vitamin C (or any other vitamin or necessary nutrient, such as iron), and in large part because of the long history of nutrition research. That’s why I am sure this requirement (plenty of microbes in your diet) is universal. That being said, in a tiny number of cases fermented food has had bad effects. This shouldn’t interfere with seeing the big picture.

  7. Seth,
    See Dan Gilbert’s “Stumbling on Happiness” for (a delightful read and) another explanation of why we’re so bad at predicting how we will feel in the future. We overpredict how difficult and how satisfying future events will be because we remember distinctive events, not regular ones — and regular events would be more stable data sources for prediction.
