Assorted Links

Thanks to Vic Sarjoo, Anne Weiss, and Marian Lizzi.

Dealing With Referee Reports: What I’ve Learned

Alex Tabarrok discusses a proposal to make referee reports and associated material publicly available. I think it would be a good thing because it would make writing a self-serving review (e.g., a retaliatory review) more dangerous. If Reviewer X writes an unreasonable review, the author is likely to complain to the editor. If the paper gets published, the unreasonableness will be highlighted — and nominal anonymity may not be enough to hide who wrote it. On the other side, as a reader, it would be extremely educational. You could learn a lot from studying these reports and the replies they generated, especially if you’re a grad student. I would like to know why some papers got accepted. For example, my Tsinghua students pointed out serious flaws in published papers. Were the problems noted by reviewers and ignored, or what?

My experience is that about 80% of reviews are reasonable. Many of those are ignorant, but that’s no crime. (A lot of reviewers know more than me.) The remaining 20% seem to go off the rails somehow. For example, Hal Pashler and I wrote a paper criticizing the use of good fits to support quantitative models. The first two reviewers seemed to have been people who did just that. Their reviews were ridiculous. Apparently they thought the paper shouldn’t be published because it might call their work into question. A few reviews have appeared to be retaliation. In the 1990s, I complained to the Office of Research Integrity that a certain set of papers appeared to contain made-up data. (ORI sent the case to the institution where the research was done. A committee to investigate did the shallowest possible review and decided I was wrong. I learned my lesson — don’t trust ORI — which I applied to the Chandra case.) After that allegation, I got stunningly unfair reviews from time to time, presumably from the people I accused. A small fraction of reviews (5%?) are so lazy they’re worthless. One reviewer of my long self-experimentation paper said it shouldn’t be published because it wasn’t science. The author (me) should go do some real science.

The main things I’ve learned about how to respond:

1. When resubmitting the paper (revised in light of the reviews), go over every objection and say how it was dealt with or why it was ignored. Making such a list isn’t very hard, it makes ignoring a criticism much easier (because you are explicit about it), and editors like it. This has become common.

2. When a review is unreasonable, complain. The theory-testing paper I wrote with Hal is one of my favorite papers, and it wouldn’t have been published where it was if we hadn’t complained. Another paper of mine said that some data failed a chi-square test many times — suggesting that something was wrong. One of the reviewers seemed not to understand what a chi-square test was. I complained and got a new reviewer.
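The chi-square check mentioned above is easy to sketch. This is a minimal Python version of Pearson’s goodness-of-fit statistic; the coin-flip counts are invented for illustration, not data from the paper in question:

```python
def chi_square_stat(observed, expected):
    """Pearson's chi-square statistic: sum over cells of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical data: 100 coin flips, 48 heads and 52 tails,
# compared against an expected 50/50 split.
stat = chi_square_stat([48, 52], [50, 50])
print(stat)  # ≈ 0.16, well below 3.84 (the df=1 critical value at p = 0.05)
```

If real data failed this test many times over, as described above, that would indeed suggest the counts did not come from the claimed process.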

I’m curious: What have you learned about responding to reviewers?

Visible Big vs. Invisible Small

In the current New Yorker, James Surowiecki writes:

The bailout of the auto industry, after all, was as unpopular as the bailout of the banks, even though it was much tougher on the companies (G.M. and Chrysler went bankrupt; shareholders were wiped out, and C.E.O.s pushed out), and even though the biggest beneficiaries of the deal were ordinary autoworkers. You might have expected a deal that helped workers keep their jobs to play well in a country spooked by ballooning unemployment. Yet most voters hated it.

Yes, rewarding failure doesn’t play well. The voters were right. The same money that was used to give a few giant companies a second (or third) chance could have been used to give many thousands of very small companies a first chance. It could have been used to help many thousands of people start new small businesses (often one-person businesses) or keep their new small businesses afloat. All those small businesses would have provided plenty of jobs, and they would have had a far more promising future, far more room for growth, than the Big Three, being far more diverse and not having already failed. The many thousands of people who wanted to start small businesses were unable to get together and make themselves visible, so the failure of government to help them went unnoticed. Their diversity was economic strength but political weakness.

It isn’t surprising things happened as they did — the Big Three (not to mention Wall Street) were bailed out, small businesses were ignored — but it is an indication of how poorly our economy is managed in the most basic ways. I’m not even an economist and I understand this simple point. Bernanke and Summers do not.

It’s easy for me to understand because the same thing happens in science. Government support of research is a good idea, but the money is misspent, in the same way. Grant support goes to a few large projects — generally to people who have already failed (to do anything useful) — rather than to a large number of small projects that haven’t yet failed. The way to support innovation is to place many small bets, not a few big ones. That’s one thing I learned from self-experimentation, which allowed me to place many small bets.

Scholarly Research Exchange

Today I got an email inviting me to contribute to a journal called SRX Neuroscience. The journal is “peer-reviewed open-access”. The email continued: “There are many reasons to submit your work to SRX Neuroscience, including an efficient online submission process, no page limits or restrictions on large data sets, immediate publication upon acceptance, and free accessibility of articles without any barriers to access, which increases their visibility.”

I’d never heard of it. Its web page didn’t open. The website for SRX (short for Scholarly Research Exchange) was extremely vague: no names, no location. And no sign of how it was funded.

Finally I learned that SRX is run by Hindawi Publishing, in Egypt. From this excellent overview I learned its money comes from author fees, $500 or more per article. They are trying a new kind of editorship: 30 editors or more per journal. Each editor handles only two articles a year and receives a 50% discount when they themselves submit an article. (I wonder what referees get.) Meanwhile, BioMed Central, a better-known open-access publisher, is having trouble: They have been forced to raise their charges to libraries so high that Yale decided to cancel.

It seems very low-rent. But, as Clayton Christensen recounted in The Innovator’s Dilemma, this is often how important new things begin. In the beginning, hydraulic shovels were only good for digging a ditch in your backyard. The makers of cable-powered shovels, whose products made the giant holes for skyscrapers, turned up their noses at such a low-prestige task. But the hydraulic shovels got better and better. Companies that made cable-powered shovels eventually went bankrupt.

Impressive Versus Effective

A profile of James Patterson, the hyperprolific novelist, says this:

“I don’t believe in showing off,” Patterson says of his writing. “Showing off can get in the way of a good story.”

A few days ago, just before this profile appeared, I gave a talk about self-experimentation at EG (= Entertainment Gathering), a TED-like conference in Monterey. One reason my self-experimentation was effective, I said, was that I wasn’t trying to impress anyone. Whereas professional scientists doing professional science care a lot about impressing other people. I planned to say it like this but didn’t have enough time:

Years ago, I went to a dance concert put on by students at Berkeley High School. I really enjoyed it. I thought to myself: I like dance concerts. So I went to a dance concert by UC Berkeley students — college students. I enjoyed it, but not as much as the high school concert. Then I went to a dance concert by a famous dance company that all of you have heard of. I didn’t enjoy it at all. Why were the professionals much less enjoyable than the high school students? Because the professionals cared a whole lot about being impressive. That got in the way of being enjoyable. Scientists want to be impressive. They want to impress lots of people — granting agencies, journal editors, reviewers, their colleagues, and prospective graduate students. All this desire to be impressive gets in the way of finding things out.

In particular, it makes self-experimentation impossible:

They can’t do self-experimentation because it isn’t impressive. Self-experimentation is free. Anyone can do it. It’s easy; it doesn’t require any rare or difficult skills. If you want to impress someone with your fancy car, self-experimentation is like riding a bike.

Because my self-experimentation was private, I was free to do whatever worked.

My broader point was that my self-experimentation was effective partly because I was an insider/outsider. I had the subject-matter knowledge of an insider, but the freedom of an outsider.

Influential Statisticians

This article (“Ten statisticians and their impacts for psychologists”) impressed me. It’s a lot more accessible and basic than the usual academic article. However, my list — of the statisticians who’ve had the biggest effect on how I analyze data — is much different than his. From more to less influential:

1. John Tukey. From Exploratory Data Analysis I learned to plot my data and to transform it. A Berkeley statistics professor once told me this book wasn’t important!

2. John Chambers. Main person behind S. I use R (open-source S) all the time.

3. Ross Ihaka and Robert Gentleman. Originators of R. R is much better than S: Fewer bugs, more commands, better price.

4. William Cleveland. Inventor of loess (local regression). I use loess all the time to summarize scatterplots.

5. Ronald Fisher. I do ANOVAs.

6. William Gosset. I do t tests.

My data analysis is 90% graphs, 10% numerical summaries (e.g., means) and statistical tests (e.g., ANOVA). Whereas most statistics texts are about 1% graphs, 99% numerical summaries and statistical tests.
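The numerical-summary side of that workflow is simple enough to write by hand. Here is a minimal Python sketch of the two-sample t statistic (Welch’s version, which drops the equal-variance assumption); the two groups are invented for illustration:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic: difference in means divided by
    the combined standard error. variance() uses the n-1 denominator."""
    m1, m2 = mean(a), mean(b)
    v1, v2 = variance(a), variance(b)
    return (m1 - m2) / sqrt(v1 / len(a) + v2 / len(b))

# Invented groups for illustration
control = [1.0, 2.0, 3.0]
treatment = [4.0, 5.0, 6.0]
print(welch_t(control, treatment))  # about -3.67
```

A few lines like this cover the 10% of the workflow that is tests and summaries; the other 90% — the graphs — is where the real looking happens.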

Science of Everyday Life: Why “Boys and Girls”? Why Not “Girls and Boys”?

I try to connect my self-experimentation to other intellectual activity. One broader category is the stunning single case — the single example that makes you think new thoughts. Another is superhobbies (activities done with the freedom of hobbyists but the skills of professionals). Superhobbies lie between hobbies and skilled jobs. A third is my position as an insider/outsider. I was close enough to sleep research to understand it but far enough away to ignore all their rules about what you can and cannot do. I had the knowledge of an insider but the freedom of an outsider.

A fourth broader category is the science of everyday life — meaning science that involves everyday life and can be done by most of us. My experiments cost almost nothing, required no special equipment or circumstances. They involved common concerns (e.g., how to sleep better) and tested treatments available to everyone (e.g., standing more, eating more animal fat). A post by Mark Liberman at Language Log has a nice non-experimental example of this category. The question is about word order in gender pairs. Why do we say “boys and girls” more often than “girls and boys”? Or “husbands and wives” more often than “wives and husbands”? There are plenty of such pairs, not all with male first (e.g., “ladies and gentlemen”). The several possible explanations can be tested in lots of ways that require no fancy equipment or data. As Liberman says,

A smart high-school student could do a neat science-fair project along these general lines.
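A minimal version of the kind of count Liberman has in mind can be done in a few lines of Python. The text here is a toy stand-in; a real project would run this over a large corpus:

```python
import re

def pair_order_counts(text, word1, word2):
    """Count occurrences of 'word1 and word2' vs. 'word2 and word1'
    in the text (case-insensitive, whole words only)."""
    forward = len(re.findall(rf"\b{word1} and {word2}\b", text, re.IGNORECASE))
    reverse = len(re.findall(rf"\b{word2} and {word1}\b", text, re.IGNORECASE))
    return forward, reverse

# Toy text standing in for a corpus
sample = ("The boys and girls sang. Later the girls and boys danced, "
          "and again the boys and girls sang.")
print(pair_order_counts(sample, "boys", "girls"))  # (2, 1)
```

Tallying many such pairs, and seeing which explanations survive the pattern of results, requires nothing fancier than this.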

A great feature of what Liberman is proposing is that the answer isn’t obvious. There isn’t a “correct” answer as there is in so much of the way that science is taught (e.g., physics labs, demonstrations). If I searched for examples of “science of everyday life” I would merely find canned demos, which have little in common with the practice of science. Whereas Liberman’s idea gets to the heart of it, at least the hypothesis-testing part.

Thanks to Stephen Marsh.

Modern Biology = Cargo-Cult Science (continued)

In an earlier post I pointed out that modern molecular biology has one big feature in common with cargo-cult science (activities with the trappings but not the substance of science): relentless over-promising. David Horrobin, in a 2003 essay, agreed with me:

Those familiar with medical research funding know the disgraceful campaigns waged in the 70s and 80s by scientists hunting the genes for such diseases as cystic fibrosis. Give us the money, we’ll find the gene and then your problems will be solved was the message. The money was found, the genes were found – and then came nothing but a stunned contemplation of the complexity of the problem, which many clinicians had understood all along.

During the question period of a talk by Laurie Garrett about science writing at the UC Berkeley School of Journalism, I said there was a kind of conspiracy between scientists and journalists to make research results (in biology/health) appear more important than they really were. Oh, no, said Garrett. If she’s right, then journalists are completely credulous. They have no idea they’re being scammed. If I wrote a book called The Real Scientific Method, there would be a whole chapter on better ways (cool data) and worse ways (over-promising) to promote your work.

The discovery of leptin, the hormone that tells the brain how much fat you have, was front-page news in 1994. Supposedly this discovery would help people lose weight. It is now abundantly clear that it hasn’t and won’t. The discoverer of leptin, Jeffrey Friedman, gave a talk at UC Berkeley several years ago and resembled a deer caught in the headlights. All he knew — following the party line — was that genetics was important. That genetics was so obviously not the reason for the obesity epidemic . . . he didn’t mention. This interview gives a sampling of his views. He really does believe in the primacy of genes:

Over the years, Dr. Friedman says, he has watched the scientific data accumulate to show that body weight, in animals and humans, is not under conscious control. Body weight, he says, is genetically determined, as tightly regulated as height.

Never mind animal and human experiments that show adult body weight is controlled by recent diet. Adult height is not controlled by recent diet. What about the obesity epidemic? Well,

“Before calling it an epidemic, people really need to understand what the numbers do and don’t say,” he said.

This is what one molecular biologist — a professor at Rockefeller University — is reduced to: telling us what data collected by other people “do and don’t say”. Not to mention qualifying the obvious (Americans are much fatter now than 50 years ago). I’m sure his lab has all the trappings of modern science. But the planes don’t land.

A journalist named David Freedman has figured this out.

Appreciative Thinking and Buddhism

After I mentioned appreciative thinking in a recent post, my friend Carl Willat wrote me:

Part of Buddhism I think is that gratitude is the secret to happiness. It’s always possible to want more, so you won’t be happy by trying to get all the things you want. Instead, being grateful for what you have is where happiness lies.

That’s a good way to put it. No matter what article you read, no matter what study you do, there are always ways it could be better (what others call flaws). Be grateful for what the article or study tells you. That’s how to learn something from it.

Physicists Disagree about Climate Change

Here is a statement from Hal Lewis, a physics professor at UC Santa Barbara, in answer to a question from CBS News:

I know of nobody who denies that the Earth has been warming for thousands of years without our help (and specifically since the Little Ice Age a few hundred years ago), and is most likely to continue to do so in its own sweet time. The important question is how much warming does the future hold, is it good or bad, and if bad is it too much for normal adaptation to handle. The real answer to the first is that no one knows, the real answer to the second is more likely good than bad (people and plants die from cold, not warmth), and the answer to the third is almost certainly not. And nobody doubts that CO2 in the atmosphere has been increasing for the better part of a century, but the disobedient temperature seems not to care very much. And nobody denies that CO2 is a greenhouse gas, along with other gases like water vapor, but despite the claims of those who are profiting by this craze, no one knows whether the temperature affects the CO2 or vice versa. The weight of the evidence [suggests] the former.

That’s reasonable. Here is a statement from another physicist, a friend of mine and Andrew Gelman’s:

Like a lot of scientists — I’m a physicist — I assumed the “Climategate” flap would cause a minor stir but would not prompt any doubt about the threat of global warming, at least among educated, intelligent people. The evidence for anthropogenic (that is, human-caused) global warming is strong, comes from many sources, and has been subject to much scientific scrutiny. Plenty of data are freely available. The basic principles can be understood by just about anyone, and first- and second-order calculations can be performed by any physics grad student. Given these facts, questioning the occurrence of anthropogenic global warming seems crazy. (Predicting the details is much, much more complicated). [He seems to miss the point here. The usual claim is that man-made warming is large relative to other global temperature changes. That’s not predictable “by any physics grad student” and to call it a “detail” is misleading. — Seth] And yet, I have seen discussions, articles, and blog posts from smart, educated people who seem to think that anthropogenic climate change is somehow called into question by the facts that (1) some scientists really, deeply believe that global warming skeptics are wrong in their analyses and should be shut out of the scientific discussion of global warming, and (2) one scientist may have fiddled with some of the numbers in making one of his plots. This is enough to make you skeptical of the whole scientific basis of global warming? Really?

At the risk of sounding very smug, my views have changed only a little. I already thought the consensus was more fragile than it appeared. That’s just a general truth about modern science. I was already skeptical of climate models because I knew how easily modelers fool themselves. I began to believe the consensus was not just fragile but wrong when I heard the story of the Yamal tree ring data — the long refusal to supply the raw data and, when the researcher’s hand was forced and the data finally supplied, the way it contradicted the claims that had been made. Climategate didn’t vastly change what I thought; it provided more evidence for ideas I already had.

Another friend of mine used to be a math professor. He has views similar to the views of my physicist friend. “Look,” I said to him, “if you want to argue that humans are causing major global warming you should at least show it’s warmer now than in the past. Even that isn’t true. The Medieval Warm Period.” “That was only in Europe,” he replied. Actually, there is evidence of the same thing in the Gulf of Mexico.