Kahneman Criticizes Social Psychologists For Replication Difficulties

In a letter linked to by Nature, Daniel Kahneman told social psychologists that they should worry about the repeatability of what are called “social priming effects”. For example, after you see words associated with old age, you walk more slowly. John Bargh of Yale University is the most prominent researcher in the study of these effects. Many people first heard about them in Malcolm Gladwell’s Blink.

Kahneman wrote:

Questions have been raised about the robustness of priming results. The storm of doubts is fed by several sources, including the recent exposure of fraudulent researchers [who studied priming], general concerns with replicability that affect many disciplines, multiple reported failures to replicate salient results in the priming literature, and the growing belief in the existence of a pervasive file drawer problem [= studies with inconvenient results are not published] that undermines two methodological pillars of your field: the preference for conceptual over literal replication and the use of meta-analysis.

He went on to propose a complicated scheme by which Lab B will see if a result from Lab A can be repeated, then Lab C will see if the result from Lab B can be repeated, and so on. A non-starter: too complex and too costly. What Kahneman proposes requires substantial graduate student labor and will not help the grad students involved get a job — in fact, “wasting” their time (as they will see it) makes it harder for them to get a job. I don’t think anyone believes grad students should pay for the sins of established researchers.

I completely agree there is a problem. It isn’t just social priming research. You’ve heard the saying: “1. Fast. 2. Cheap. 3. Good. Choose 2.” When it comes to psychology research, it’s “1. True. 2. Career. 3. Simple. Choose 2.” Overwhelmingly researchers choose 2 and 3. There isn’t anything wrong with choosing to have a career (= publish papers), so I put a lot of blame for the current state of affairs on journal policies, which put enormous pressure on researchers to choose “3. Simple”. Hardly any journals in psychology publish (a) negative results, (b) exact replications, or (c) complex sets of results (e.g., where Study 1 finds X and apparently identical Study 2 does not find X). The percentage of psychology papers with even one of these characteristics is about 0.0%. You could look at several thousand and not find a single instance. My proposed solution to the problem pointed out by Kahneman is new journal policies: 1. Publish negative results. 2. Publish (and encourage) exact replications. 3. Publish (and encourage) complexity.

Such papers exist. I previously blogged about a paper that emphasized the complexity of findings in “choice overload” research — the finding that too many choices can have bad effects. Basically it concluded the original result was wrong (“mean effect size of virtually zero”), except perhaps in special circumstances. Unless you read this blog — and have a good memory — you are unlikely to have heard of the revisionist paper. Yet I suspect almost everyone reading this has heard of the original result. A friend of mine, who has a Ph.D. in psychology from Stanford, told me he considered Sheena Iyengar, the researcher most associated with the original result, the greatest psychologist of his generation. Iyengar wrote a book (“The Art of Choosing”) about the result. I found nothing in it about the complexities and lack of repeatability.

Why is personal science important? Because personal scientists — people doing science to help themselves, e.g., sleep better — ignore 2. Career and 3. Simple.

Assorted Links


The Terry Deacon Affair

Terrence Deacon is a professor of anthropology at the University of California, Berkeley — at the moment, chair of the anthropology department. (Deacon, like me, is interested in the evolution of language.) How unfortunate for the department, especially his graduate students, that he has recently been accused of using vast amounts of another person’s work without giving her credit. It isn’t easy to see the overlap, maybe because Deacon is a terrible writer (“by far the most unreadable book I have ever encountered,” said a reviewer of one of his books), but there appears to be no doubt of the similarity or of Deacon’s exposure to the work he is accused of not citing. Deacon says he doesn’t remember it.

When I brought unquestionable examples of plagiarism by Leslie Iversen, an Oxford professor, to the attention of Julie Maxton, the Registrar of Oxford University, she dismissed them (“honest error” — appearing to say that Iversen didn’t know that word-by-word copying is wrong). In this case there is no word-by-word copying but the failure to cite is far more upsetting to the persons not cited. What may have been copied is more abstract (“deeper”, you could say) and therefore more important.

At first, the complaint was dismissed. “I have concluded that the information available to me does not warrant appointment of an Investigative Officer under our campus faculty disciplinary procedures. The conduct you have alleged would not constitute a violation of the University of California’s Faculty Code of Conduct,” wrote Janet Broughton, Vice Provost, on May 27, 2011.

Later (December 12, 2011), Robert Price, Associate Vice Chancellor for Research, responded, “The fact that certain concepts or phrases used by Dr. Deacon in the article you provided are the same or similar to concepts that appear in chapters from your book is not evidence of plagiarism, as these concepts may not be unique to your work. Perhaps the way you use these concepts is unique, which then would constitute plagiarism.” This understates the evidence, which is a long series of similarities.

Finally, the fact that outsiders found the claims (of failure to give credit) credible appears to have convinced Price that something must be done. “The continuing public dispute that your claims have generated lead me to believe that such an investigation [of the claims] is necessary in order to ‘clear the air’,” wrote Price. After being pressured into investigating, Price, a political science professor, revealed a lack of understanding of social pressure in a September 3, 2012 letter:

I wish to make crystal clear to you, your associates, and to all those to whom you are communicating that the University of California, Berkeley has not found that Professor Deacon has engaged in any form of research misconduct. The sole reason for undertaking an investigation are the claims made by you and your associates. [Contradicting what he said earlier — that the “continuing public dispute” led to the investigation.] . . . The idea that you would use my communications with you [“use” in the sense of posting on a website] and the ongoing examination of your allegations by UC Berkeley as part of what increasing strikes me as a vendetta against Professor Deacon is reprehensible.

My letter to Alicia Juarrero [who complained] ends with this paragraph: “Our University policy on research misconduct, as well as the federal regulation on which it is based, require that all stages in the research misconduct investigation procedure are treated as strictly confidential. (UCB “Research Misconduct: Policies, Definitions and Procedures,” item IC and Federal Regulation 45CFR93.108). I expect that you will adhere to this requirement.” Rather than adhere to the stated requirement of confidentiality, Dr. Juarrero shared the letter with you and you, in turn, posted it on your website. What purpose is being served other than to make it appear that Deacon is guilty of something before even a single one of your claims has been validated? This sort of tactic will be familiar to those who remember the history of Joe McCarthy.

“What purpose is being served”? Uh, making the accusations harder to ignore? As for McCarthy, he made accusations without supplying evidence (“In this envelope [which he didn’t open] I have the names of 80 Communists in the State Department”). That is not happening here.

Why Alicia Juarrero Got Mad at Terry Deacon. New Terry Deacon Website.

Acne: Reality is Not a Morality Tale

Someone named Red Fury made an interesting comment on my Boing Boing article about acne:

I had acne on/off for years. . . . In my mid-thirties, I tried the Retin-A at night, antibiotic gel for day regimen for about 2 years – no effect. . . . Then, I was talking to a co-worker whose daughter was taking ‘modeling classes’ to become a teen model. She casually mentioned her acned daughter had to give up rice, potato chips, and bread, all of which are high-glycemic index foods. My quack-radar went off, and I looked around for something scientific behind that advice. https://www.ajcn.org/content/86…

Huh. I guess those nutrition-bashing dermatologists actually did a study and published the scientific results in a peer-reviewed journal. . . . My acne disappeared completely as soon as I eliminated rice and potatoes.

He finds a study that supports the casual advice, he follows the advice, and his acne disappears. By convincing him to follow the advice, the scientific study helped him get rid of his acne. Which is impressive.

The interesting twist is that the study was published twice, clearly breaking the rules. Bad scientists! Who did something really good.

Assorted Links

Thanks to Anne Weiss.

Assorted Links

Thanks to Hal Pashler and Robin Barooah.

Prize Fight: The Race and Rivalry to be First in Science by Morton Meyers

Prize Fight: The Race and Rivalry to be First in Science (2012) by Morton Meyers (copy sent to me by the publisher) is about battles/disagreements over credit, often within a lab. Jocelyn Bell noticed the first pulsar — how much credit does she deserve relative to her advisor, Antony Hewish, who built the structure within which she worked? (Not much, said Bell. “I believe it would demean Nobel Prizes if they were given to research students.”) The structure and subtitle of the book make little sense — there is a chapter about how science resembles art and a chapter about data fabrication, for instance, and nothing about races or being first. The core of the book is two stories about credit: for the discovery of streptomycin, the first drug effective against tuberculosis, and for the invention of MRI (magnetic resonance imaging). Meyers is a radiology professor and a colleague of one of the inventors of MRI.

I liked both stories. I find it hard to learn anything unless there is emotion involved. Both stories are emotional — people got angry — which made it easy to learn the science. Streptomycin was found by screening dirt. It was already known that dirt kills microbes. The graduate student who made the discovery was indeed a cog in a machine but later he was mistreated and got angry and sued. The first MRI-like machine was built by a doctor named Raymond Damadian, who was not one of the recipients of the Nobel Prize given out for its invention. He had good cause to be furious. The otherwise good science writer Horace Freeland Judson wrote an op-ed piece about it (“No Nobel Prize for Whining”) that ended “His behavior stands in stark and elegant contrast to the noisy complaining of Raymond Damadian”. To name-call (“whining”, “noisy”) in a New York Times op-ed is to suggest your case is weak.

I have had a related experience. When I was a graduate student, at Brown University, I did experiments about cross-modal use of an internal clock. Do rats use the same clock to measure the duration of sound and the duration of light? (Yes.) I got the idea from human experiments about cross-modal transfer. By the time my paper (“Cross-Modal Use of an Internal Clock”) appeared, I was an assistant professor. A few months after it was published, I went back to Brown to visit my advisor, Russell Church. On the day of my visit, he had just received a new issue of the main journal in our field (Journal of Experimental Psychology: Animal Behavior Processes — where my article appeared). It was in a brown wrapper on his desk. I opened it. The second article was “Abstraction of Temporal Attributes” by Warren Meck and Russell Church. (Meck was a graduate student with Church.) I didn’t know about it. It was based on my work. The first experiment was the same (except for trivial details) as the first experiment of my article. The introduction did not mention me. I leafed through it. Buried in the middle it said “This result replicates previous reports from our laboratory (Meck & Church, in press; Roberts, 1982).”

I was angry. “Why did you do this?” I asked Church. “To make it seem more important,” he said. I consoled myself by thinking how bad it looked (on Church’s record). I never visited him, and almost never spoke to him, again. Years later I was asked to speak at a conference session honoring him. I declined. What he did amounted to the rich (well-established) stealing from the poor (not established) and jeopardized my career. When my article appeared, I didn’t have tenure. It was far from certain I would get it. I hadn’t written many papers. If you read both papers (Meck and Church’s, and mine), you could easily be confused: Who copied whom? This confusion reduced the credit I got for my work and reduced my chance of getting tenure. Church surely knew this. Failure to get tenure could have ended my career.


Assorted Links

Thanks to Craig Fratrik and Bryan Castañeda.

What Motivates Scientists? Evidence From Cancer Research

A friend of mine who worked in a biology lab said the grad students and post-docs joked about the clinical-relevance statements included at the end of papers and grant proposals: how the research would help cure cancer, retard aging, and so on. It was nonsense, they knew, but had to be included to help funding agencies justify their spending.

Principal investigators never say such things. Are they wiser than grad students and post-docs? Fortunately for the rest of us, actions speak louder than words. An action — actually, a lack of action — that suggests that P.I.’s know their research has little connection to curing cancer, etc., is 50 years of widespread indifference by cancer researchers to the possibility that their research uses a mislabeled cell line. For example, you think you are studying breast cancer cells but you are actually studying melanoma cells. A recent WSJ article says that the problem was brought to the attention of cancer researchers in 1966 but they have been “slow” to do anything about it:

University of Washington scientist Stanley Gartler warned about the practice [of using mislabeled cells] in 1966. He had developed a pioneering technique using genetic markers that would distinguish one person’s cells from another’s. Using the process, he tested 20 of the most widely used cancer cell lines of the era. He found 18 of the lines weren’t unique: They were Ms. Lacks’ cervical cancer. . . . A decade after publication of his findings, Gartler attended a conference and introduced himself to a scientist. Dr. Gartler recalled the man told him, “‘I heard your talk on contamination. I didn’t believe what you said then and I don’t believe what you said now.’”

What he meant was: I ignored what you said. Yet it costs only $200 to check your cell line. Nearly fifty years later, mislabeled cell lines remain a big problem. “Cell repositories in the U.S., U.K., Germany and Japan have estimated that 18% to 36% of cancer cell lines are incorrectly identified,” says the article. This indicates considerable indifference to the possibility of mislabeling.

If you truly wanted to cure breast cancer, would you spend $200 (out of a grant that might be $100,000/year) to make sure you were using a relevant cell line? Of course. If you were trying to cure your daughter’s breast cancer or your mother’s melanoma, would you make absolutely sure you were using the most relevant cell line? Of course. I conclude that a large fraction of cancer researchers care little about the practical value of their research.
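The arithmetic behind that conclusion is worth making explicit. A back-of-envelope sketch, using only the round figures quoted above (a $200 authentication test, a roughly $100,000/year grant, and the repositories’ 18–36% estimated mislabeling rates — illustrative numbers from the text, not real grant data):

```python
# Back-of-envelope: authentication cost vs. the risk of using a mislabeled line.
# All figures are the rough illustrative numbers from the text above.

test_cost = 200          # one-time cost to check a cell line's identity
annual_grant = 100_000   # rough single-lab grant budget, per year

# The test costs a tiny fraction of one year's budget.
share = test_cost / annual_grant
print(f"Test cost as share of annual grant: {share:.1%}")  # 0.2%

# Expected waste from skipping the test, if working with a wrong line
# invalidates a year's work, at the estimated mislabeling rates.
for rate in (0.18, 0.36):
    print(f"At {rate:.0%} mislabeling risk, expected waste: ${rate * annual_grant:,.0f}")
```

At either estimate, the expected loss from skipping the check is on the order of a hundred times the cost of the check itself — which is the point of the paragraph above.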

I believe that one reason my personal science found new solutions to common problems (obesity, insomnia, etc.) is that my overwhelming goal was to find something of practical value. I wasn’t trying to publish papers, impress my colleagues, renew a grant, win awards, and so on. No doubt many cancer researchers want to cure cancer. But this 50-year-and-not-over chapter in the history of their field suggests that many of them have other more powerful motivations that conflict with curing cancer.

Thanks to Hal Pashler. Hal’s work on “voodoo correlations” in social neuroscience is another instance where the guilty parties, I believe, knew they might be doing something wrong but didn’t care.

Assorted Links

Thanks to Peter Spero and Hal Pashler.