Assorted Links

Design Farmer

A friend of mine majored in design at Tsinghua and is now working as a designer. Her opinion of her education has gone down: she has come to see that designers from other schools are better trained than she is.

At Tsinghua, her teachers denigrated learning to use this or that software program. To design something using a computer program was to be a design farmer, they said. They preferred to talk about big ideas. “I hate big ideas,” said my friend.

Her comments reminded me of law professors who would rather teach philosophy than how to be a lawyer (and are surprised when students play solitaire during class) and education professors who don’t teach their students how to teach.

The Contribution of John Ioannidis

From an excellent Atlantic article about John Ioannidis, who has published several papers saying that medical research is far less reliable than you might think:

A different oak tree at the site provides visitors with a chance to try their own hands at extracting a prophecy. “I [bring] all the researchers who visit me here, and almost every single one of them asks the tree the same question,” Ioannidis tells me . . . “‘Will my research grant be approved?’”

A good point. I’d say his main contribution, based on this article, is pointing out the low rate of repeatability of major medical findings. Until someone actually calculated that rate, it was hard to know what it was, unless you had inside experience. The rate turned out to be lower than a naive person might think, but not lower than an insider might think, which explains why insiders were not surprised:

David Gorski . . . noted in his prominent medical blog that when he presented Ioannidis’s paper on [lack of repeatability of] highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.”

I also like the way Ioannidis has emphasized the funding pressure that researchers face, as in that story about the oak tree. Obviously it translates into pressure to get positive results, which translates into overstatement.

I also think his critique of medical research has room for improvement:

1. Black/white thinking. He talks in terms of right and wrong. (“We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine.”) This is misleading. There is signal in all that medical research he criticizes; it’s just not as strong a signal as the researchers claimed. In other words the research he says is “wrong” has value. He’s doing the same thing as all those meta-analyses that ignore all research that isn’t of “high quality”.

2. Nihilism (which is a type of black/white thinking). For example,

How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.

I’ve paid a lot of attention to health-related research and benefited greatly. Many of the treatments I’ve studied through self-experimentation were based on health-related research. An example is omega-3. There is plenty of research suggesting its value and this encouraged me to try it. Likewise, there is plenty of evidence supporting the value of fermented foods. That evidence and many other studies (e.g., of hormesis) paint a large consistent picture.

3. Bias isn’t the only problem, but, in this article, he talks as if it is. Bias is a relatively minor problem: you can allow for it. Other problems you can’t allow for. One is the Veblenian tendency to show off. Thus big labs are seen as better than small ones, regardless of which would make more progress. Big studies are seen as better than small ones, expensive equipment as better than cheap, and, above all, useless research as better than useful. The other is a fundamental misunderstanding about what causes disease and how to fix it. A large fraction of health research money goes to researchers who think that studying this or that biochemical pathway or genetic mechanism will make a difference — for a disease that has an environmental cause. They are simply looking in the wrong place. I think the reason is at least partly Veblenian: To study genes is more “scientific” (= high-tech = expensive) than studying environments.

Thanks to Gary Wolf.

The Nobel Prize: Not Helping

Nassim Taleb recently criticized the Nobel Prize in Economics:

According to Taleb, there are a number of mistaken ideas about forecasting and measuring risk, which all contribute to events like the 2008 global crisis. The Nobel prize, he says, has given them a stamp of approval, allowing them to propagate.

It isn’t just economics. As I’ve said before, the Nobel Prize in medicine was not given for the discovery that smoking causes lung cancer. It was not given for the discovery that lack of folate causes birth defects. Both discoveries were enormously useful. It has been given for several discoveries, such as the connection between telomeres and aging, with (so far) little or no practical value.

This is no mystery. The Nobel Prize must be prestigious, therefore must honor high-prestige research. Veblen argued long ago that in academia high prestige correlates with low practical value. Just today I told a friend Veblen’s idea that professors use jargon for the same reason men wear ties — to show off how useless they are. The economics research (“Harry Markowitz, William Sharpe, Robert Merton, Myron Scholes, Robert Engle, Franco Modigliani and Merton Miller”) that Taleb is criticizing was high prestige. The so-far-useless biology that has received a Nobel Prize was high prestige; the highly-useful epidemiology that didn’t receive the prize was low prestige.

Thanks to Dave Lull.

The Irony of What Works

After posting about Doug Lemov, I ordered Teach Like a Champion. It arrived yesterday. Leafing through it, I came across a section titled “The Irony of What Works,” which begins:

One of the biggest ironies I hope you will take away from reading this book is that many of the tools likely to yield the strongest classroom results remain essentially beneath the notice of our theories and theorists of education.

Lemov continues with an example: Teaching students how to distribute classroom materials, such as handouts. This can save a lot of time. Then he adds:

Unfortunately this dizzyingly efficient technique — so efficient it is all but a moral imperative for teachers to use it — remains beneath the notice of our avatars of educational theory. There isn’t a school of education that would stoop to teach its aspiring teachers how to train their students to pass out papers.

The last chapter of Veblen’s Theory of the Leisure Class is about just this — the importance that professors (like everyone else) place on status display and how this interferes with their effectiveness. The connection with self-experimentation is that no matter how effective it is, no psychology department would stoop to teach it. Or, at least, that’s the current state of affairs.

The book’s index doesn’t include Veblen, although it does include Richard Thaler.

More Flight From Data

I’ve blogged many times about the desire of professors to show off and how it interferes with being useful. It doesn’t just make them bad teachers, it makes them bad scientists. Here’s an example from economics (via Marginal Revolution):

“The mainstream of academic research in macroeconomics puts theoretical coherence and elegance first, and investigating the data second,” says Mr. Rogoff. For that reason, he says, much of the profession’s celebrated work “was not terribly useful in either predicting the financial crisis, or in assessing how it would play out once it happened.”

“[Academic economists] almost pride themselves on not paying attention to current events,” he says.

Pure Veblen, who in Theory of the Leisure Class provided many examples of people, including professors, priding themselves on being useless. Men wear ties, he said, to show they don’t do manual labor (which is clearly useful).

My research is closer to biology, where you can say the same thing: much of the profession’s celebrated work has not been terribly useful. Yesterday I gave an example (the oncogene theory of cancer).

Modern Veblen: Flight From Data.

Show-Off Professors

A new Jeffrey Eugenides short story quotes Derrida. Quote 1:

In that sense it is the Aufhebung of other writings, particularly of hieroglyphic script and of the Leibnizian characteristic that had been criticized previously through one and the same gesture.

Quote 2:

What writing itself, in its nonphonetic moment, betrays, is life. It menaces at once the breath, the spirit, and history as the spirit’s relationship with itself. It is their end, their finitude, their paralysis.

“A little Derrida goes a long way and a lot of Derrida goes a little way,” said a friend of mine who was a graduate student in English. These quotes show why. In Theory of the Leisure Class, Veblen argued that professors write like this (and assign such stuff to their students) to show status. I have yet to hear a convincing refutation of this explanation nor a plausible alternative. Is there a plausible alternative?

Veblen was saying that professors are like everyone else. Think of English professors as a model system. Their showing-off is especially clear. It’s pretty harmless, too, but when a biology professor (say) pursues a high-status line of research about some disease rather than a low-status but more effective one, it does — if it happens a lot — hurt the rest of us. Sleep researchers, for example, could do lots of self-experimentation but don’t, presumably because it’s low-status. And poor sleep is a real problem. Throughout medical school labs, researchers are studying the biochemical mechanism and genetic basis of this or that disorder. I’m sure this is likely to be less effective in helping people avoid that disorder than studying its environmental roots, but such lines of research allow the researchers to request expensive equipment and work in clean isolated laboratories — higher status than cheap equipment and getting your hands dirty. I don’t mean high-status research shouldn’t happen; we need diversity of research. But, like the thinking illustrated by the Derrida quotes, there’s too much of it. A little biochemical-mechanism research goes a long way and a lot of biochemical-mechanism research goes a little way.

Oprah Meets Veblen

An assistant manager at Marshall Field’s, the Chicago department store, told Gawker the following story:

I was walking through the floor, and I hear a voice call my name. . . . Once she started speaking to me, I realized it was Oprah. Honestly, she is unrecognizable without the spackle/wig. Anyway, she was very nice, and asked me if I would offer my opinion on a china pattern she was looking at for her house. It was Villeroy and Boch (German, middle-range) “Petite Fleur.” Very cute, kind of French-country, with a small, scattered floral design. I said, “What’s not to like?” Oprah responded, “Well, it’s not that expensive, and I don’t want people who come to my house to think I’m cheap.”

Andrew Gelman’s Top Statistical Tip

Andrew Gelman writes:

If I had to come up with one statistical tip that would be most useful to you–that is, good advice that’s easy to apply and which you might not already know–it would be to use transformations. Log, square-root, etc.–yes, all that, but more! I’m talking about transforming a continuous variable into several discrete variables (to model nonlinear patterns such as voting by age) and combining several discrete variables to make something [more] continuous (those “total scores” that we all love). And not doing dumb transformations such as the use of a threshold to break up a perfectly useful continuous variable into something binary. I don’t care if the threshold is “clinically relevant” or whatever–just don’t do it. If you gotta discretize, for Christ’s sake break the variable into 3 categories.

I agree (and wrote an article about it). Transforming data is so important that intro stats texts should have a whole chapter on it — but instead barely mention it. A good discussion of transformation would also include use of principal components to boil down many variables into a much smaller number. (You should do this twice — once with your independent variables, once with your dependent variables.) Many researchers measure many things (e.g., a questionnaire with 50 questions, a blood test that measures 10 components) and then foolishly correlate all independent variables with all dependent variables. They end up testing dozens of likely-to-be-zero correlations for significance, thereby effectively throwing all their data away: when you do dozens of such tests, none can be trusted.
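The ideas above can be sketched in a few lines. This is a minimal illustration, not anyone’s actual analysis: the income and age variables, the 50-item questionnaire, and all the cutoffs are hypothetical, and only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Transform a skewed variable rather than leaving it raw
income = rng.lognormal(mean=10, sigma=1, size=n)  # right-skewed
log_income = np.log(income)                       # roughly symmetric

# Model a nonlinear pattern by discretizing into 3 bins, never just 2
age = rng.integers(18, 90, size=n)
age_cat = np.digitize(age, bins=[35, 60])         # 0 = young, 1 = middle, 2 = old

# Boil 50 questionnaire items down to a few principal components,
# instead of correlating all 50 items with every outcome
X = rng.normal(size=(n, 50))                      # hypothetical item responses
Xc = X - X.mean(axis=0)                           # center each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                            # first 3 component scores
explained = s[:3] ** 2 / (s ** 2).sum()           # share of variance captured
```

You would then relate `scores` (and a similarly reduced set of dependent variables) to your outcomes, testing a handful of meaningful correlations instead of dozens of noise-level ones.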

My explanation why this isn’t taught differs from Andrew’s. I think it’s pure Veblen: professors dislike appearing useful and like showing off. Statistics professors, like engineering professors, do less useful research than you might expect, so they are correspondingly less aware of how useful transformations are. And because most transformations don’t involve esoteric math, writing about them doesn’t allow you to show off.

In my experience, not transforming your data is at least as bad as throwing half of it away, in the sense that your tests will be that much less sensitive.

Journalists and Scientists

A few days ago I quoted an editor who works for Rupert Murdoch as saying that journalists care too much about impressing their colleagues and winning prizes and not enough about helping readers. Here is Walter Pincus, a Washington Post reporter, saying the same thing:

Editors have paid more attention to what gains them prestige among their journalistic peers than on subjects more related to the everyday lives of readers. For example, education affects everyone, yet I cannot name an outstanding American journalist on this subject.

I quote this to support the Veblenian view I’ve expressed many times on this blog — that scientists would rather do what gains them prestige among their peers than what helps the rest of us, who support most science. I think it’s hard to understand the success of my self-experimentation (e.g., new ways of losing weight) until you understand this aspect of science. I was successful partly because my motivation was different.