Dealing With Referee Reports: What I’ve Learned

Alex Tabarrok discusses a proposal to make referee reports and associated material publicly available. I think it would be a good thing because it would make writing a self-serving review (e.g., a retaliatory review) more dangerous. If Reviewer X writes an unreasonable review, the author is likely to complain to the editor. If the paper gets published, the unreasonableness will be highlighted — and nominal anonymity may not be enough to hide who wrote it. On the other side, as a reader, it would be extremely educational. You could learn a lot from studying these reports and the replies they generated, especially if you’re a grad student. I would like to know why some papers got accepted. For example, my Tsinghua students pointed out serious flaws in published papers. Were the problems noted by reviewers and ignored, or what?

My experience is that about 80% of reviews are reasonable. Many of those are ignorant, but that’s no crime. (A lot of reviewers know more than me.) The remaining 20% seem to go off the rails somehow. For example, Hal Pashler and I wrote a paper criticizing the use of good fits to support quantitative models. The first two reviewers seemed to have been people who did just that. Their reviews were ridiculous. Apparently they thought the paper shouldn’t be published because it might call their work into question. A few reviews have appeared to be retaliation. In the 1990s, I complained to the Office of Research Integrity that a certain set of papers appeared to contain made-up data. (ORI sent the case to the institution where the research was done. A committee to investigate did the shallowest possible review and decided I was wrong. I learned my lesson — don’t trust ORI — which I applied to the Chandra case.) After that allegation, I got stunningly unfair reviews from time to time, presumably from the people I accused. A small fraction of reviews (5%?) are so lazy they’re worthless. One reviewer of my long self-experimentation paper said it shouldn’t be published because it wasn’t science. The author (me) should go do some real science.

The main things I’ve learned about how to respond are: 1. When resubmitting the paper (revised in light of the reviews), go over every objection and how it was dealt with or why it was ignored. Making such a list isn’t very hard; it makes ignoring a criticism much easier (because you are explicit about it), and editors like it. This has become common. 2. When a review is unreasonable, complain. The theory-testing paper I wrote with Hal is one of my favorite papers, and it wouldn’t have been published where it was if we hadn’t complained. Another paper of mine said that some data failed a chi-square test many times — suggesting that something was wrong. One of the reviewers seemed not to understand what a chi-square test was. I complained and got a new reviewer.

I’m curious: What have you learned about responding to reviewers?

12 thoughts on “Dealing With Referee Reports: What I’ve Learned”

  1. Hi Seth,

    When Aaron and I presented our clinical poster — “The Role of Nutrition in the Epigenetics of Health: a Patient-Driven Approach” — at UCLA a few weeks back and listed our two individual e-patient self-experimentation cases as content, one academic looked over our poster and then told me, “This is not science. There’s no real science here. This is just two cases.”

    Clinical medicine is founded on the patient narrative, the study of individual patient cases. Somehow this spirit has not translated into the empirical side of medical inquiry. Engaging clinicians with patients in self-experimentation, I suspect, is one practical way to fill that gap.

    Best,

    Brent

  2. A discussion of some of your non-mainstream theories (e.g., fermented foods, Shangri-La diet) viewed through the lens of that paper might make a good blog post or series of posts.

  3. “Viewed through the lens of that paper”? I’m not sure what you mean. The Shangri-La Diet derives from Example 10 of that paper. My interest in fermented foods derives partly from Examples 1-5 of that paper, which suggested that modern life lacks lots of stuff we need to be healthy.

  4. Sorry, that was dumb: there’s more than one paper linked in the post, and I didn’t specify which one! Yeesh. I meant the paper criticizing the use of good fits to support quantitative models. As I recall, it doesn’t just criticize, it sets out guidelines for what ought to count as good support. That’s the lens I meant.

  5. This kind of reminds me of how you often learn a lot about an issue from the Wikipedia “discuss” page that you don’t get in the main article.

    I think bringing some of the approaches used in formal debate to academic squabbling could only benefit mankind.

  6. A few years back, a retiring editor of a journal wrote in his goodbye letter that the most enjoyable thing to him while he was editor, and what he would miss the most, were the comments that the reviewers gave to him, the ones not to be shared with the authors.
    I once worked on a paper as a coauthor down on the list that took six rounds of revisions and objections by mainly one reviewer. The lead author on the paper, who had written a lot in the area, knew who the dissenting reviewer was, just by how s/he wrote. I often wondered what the editor thought about letting it go on that long.

  7. One last story. I found a brief article on review etiquette and courtesy written by a famous psychologist a lot of years ago, read it, and found it very inspiring and helpful. I mentioned it to my advisor, describing to him how good an essay it was. He told me that 15 years prior to that he had received a review from that same individual that was the rudest one he’d ever received.

  8. Seth,

    In my comments on MR I noted that the most intense complaints about reports resemble yours, about bad reports that lead to a paper being rejected. These will not be revealed by publishing the reports of papers that are accepted. Your Tsinghua students may have found errors in published papers (in which case there is an opportunity to write a publishable comment), but do not expect editors under such a system to publish papers with reports whose critical comments have not yet been responded to.

    I argued that such a setup will make it less likely that editors will do the brave thing and publish papers that have outstanding critical comments in any report. Some of the most cited papers I have published in JEBO have had exactly that sort of situation, reports from referees or even associate editors opposing publication. Maybe publishing these would be interesting, but most editors are chicken shits who would not publish such papers so as to avoid any embarrassment on their part. This proposal would only lead to more mediocrity than we already see in published journal articles.

  9. BTW, allow me to modify my remarks a bit. I do not think that “most” editors are “c……n s…s.” However, enough are that the system you support would on net tend to have the effect I warn of, that we would see more mediocrity and less innovation in published journal articles.

  10. Barkley, I see your point. That makes sense. But I wonder if it would be like cameras on reality shows: After a while the contestants forget about them. Or at least that is my impression, given how much the contestants embarrass themselves. And perhaps the possibility that their comments might become public would lead to better behavior by editors.

  11. Seth,

    Again, the worst behavior by referees is when they are recommending rejections or are simply sitting on papers. This stuff will not get revealed by publishing reports for accepted papers, generally speaking, unless the editor has overruled them or the author has kowtowed to ridiculous demands (“cite me, me, me, me…!”). Nobody is going to publish unpublished papers to show the comments on them, nobody.
