The Decline of Harvard

In high school, I learned a lot from Martin Gardner’s Mathematical Games column in Scientific American. I read it at the Chicago Public Library on my way home from school while transferring from one bus line to another — thank heavens transfers were good for two hours. In college, it was long fact articles in The New Yorker. Now it’s Marginal Revolution, where I recently learned:

Harvard has also declined as a revolutionary science university from being the top Nobel-prize-winning institution for 40 years, to currently joint sixth position.

The full paper is here.

What should we make of this? Clayton Christensen, the author of The Innovator’s Dilemma (excellent) and a professor at the Harvard Business School, has been skeptical of Harvard’s ability to maintain its position as a top business school. He believes, based on his research and the facts of the matter, that it will gradually lose its position due to down-market competitors such as Motorola University and the University of Phoenix, just as Digital Equipment Corporation, once considered one of the best-run companies in the world, lost its position. A few years ago, in a talk, he described asking 100 of his MBA students if they agreed with his analysis. Only three did.

How would we know if Harvard was losing its luster? Christensen asked a student who strongly disagreed with him. Harvard business students (except Christensen’s) are taught to base their decisions on data. So Christensen put the question like this: If you were dean of the business school, what evidence would convince you that this was happening and it was time to take corrective action?

When the percentage of Harvard graduates among CEOs of the top 1000 international companies goes down, said the student.

But by then it will be too late, said Christensen. His students agreed: By then it would be too late to reverse the decline.

Christensen’s research is related to mine, oddly enough — we both study innovation. For explicit connections, see the Discussion section of this article and the Reply to Commentators section of this one.

Secrets of a Successful Blog (part 2)

Aaron Swartz is an excellent software developer (co-founder of reddit), a creative and interesting writer, and a successful blogger, judging by number of comments. I asked him what makes a blog successful. Three things, he said:

1. Persistence. Readership builds over time.

2. Frequency. The more often, the better. It is pure operant conditioning (although Aaron, a fan of anti-behaviorist Alfie Kohn, did not use that term): When people check your blog and find new content they are rewarded, and keep checking. If they check and find nothing new, they stop checking. Although Aaron uses an aggregator (which does the checking), only about 15% of blog readers do so, he said. (I use Sage, a Firefox add-on.) Aaron posts every day or so.

3. A distinct voice. When people visit your blog, they should know what to expect. When he started, Aaron blogged about all sorts of things; he has become more consistent from one entry to the next.

Part 1 (Marginal Revolution co-author Tyler Cowen’s view) is here, with comments here.

American Haiku

The American version of haiku, I submit, is a Priceless ad. My contributions:

The Shangri-La Diet: $15 (including shipping)
bottle of grapeseed oil: $6
additional groceries each month: -$200
not worrying where your next Yodel is coming from: priceless

Note to SLD dieters: The reference to grapeseed oil dates this. I now drink refined walnut oil and flaxseed oil (nose-clipped).

smaller pants: $60
blush I use as excuse for better-looking skin: $8
blood test for improved lipids: $80
migraine-free TOM: priceless

Short blog posts are a little like haiku.

Update (7 Dec 06): funny coincidence.

The Invisible Made Visible

An artist, UC Santa Cruz professor of art history Mary Holmes would say, is someone who makes the invisible visible. Does that make the Internet an artist? These examples of the invisible made visible impress me:

1. Security footage of a man stealing two chairs. (Thanks to HuntGrunt.)

2. Tracking data at the Shangri-La Diet forums reveal what weight loss is like for other people.

I think the other extreme — the very visible made extremely visible — is also art. Here is an example: David Caruso one-liners. Too funny not to be art.

On Scientific Method

When I visited George Mason University recently, I asked Tyler Cowen, “What’s the secret of a successful blog?” Cowen and Tabarrok’s Marginal Revolution is the most successful blog I know of.

His answer: “Three elements: 1. Expertise. 2. Regularity. 3. Recurring characters, like a TV show.” By regularity he meant at least 5 times/week.

I saw I had considerable room for improvement. Since then, I’ve tried to post at least twice/week. With this post I am adding scientific method to the subtitle, which I hope makes me appear more expert. A Berkeley philosophy professor named Paul Feyerabend wrote a book that I thought was called On Method but that I see is actually called Against Method. He was at Berkeley when I arrived. I remember two things about him: 1. He gave all his students A’s. 2. He ate at Chez Panisse every night.

CIA Fun Facts

Tonight, at a panel discussion at UC Berkeley that was part of The New Yorker College Tour, I learned two things about Central Intelligence Agency headquarters in Langley, Virginia:

1. There are scales in the bathrooms (according to Lawrence Wright).

2. There is a gift shop that sells CIA golf balls and the like. By the register is a notice: “If you are a covert operative, don’t use your credit card” (according to Jeffrey Goldberg).

The big shock, however, was neither of these. It was, as Hilary Goldstine pointed out, that there were almost no undergraduates in the audience. Which speaks volumes about UC Berkeley. It was a great discussion. Jane Mayer was the third discussant and Orville Schell the moderator.

The Trouble With Rigor

This is an easy question: When writing down numbers, when is it bad to be precise? Answer: When you exceed the precision to which the numbers were measured. If a number was measured with a standard error of 5 (say), don’t record it as 150.323.
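As a sketch of the rule, here is a small Python helper (my own illustration, not anything from the conference) that rounds a measurement to the precision implied by its standard error, using the common convention of keeping one significant digit of the error:

```python
import math

def round_to_error(value, stderr):
    """Round a measurement so it doesn't claim more precision than
    its standard error supports (keep one significant digit of the
    error -- a common convention, not the only one)."""
    if stderr <= 0:
        return value
    # Decimal place of the error's first significant digit.
    digits = -int(math.floor(math.log10(stderr)))
    return round(value, digits)

# A value measured with a standard error of 5 shouldn't be
# reported as 150.323:
print(round_to_error(150.323, 5))     # -> 150.0
print(round_to_error(150.323, 0.05))  # -> 150.32
```

The same helper keeps more decimals when the error is small, which is the whole point: the data, not the typist, decide the precision.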

But this, apparently, is a hard question: When planning an experiment, when is it bad to be rigorous? Answer: When the effort involved is better used elsewhere. I recently came across the following description of a weekend conference for obesity researchers (December 2006, funded by National Institute of Diabetes & Digestive & Kidney Diseases):

Obesity is a serious condition that is associated with and believed to cause much morbidity, reduced quality of life, and decreased longevity. . . . Currently available treatments are only modestly efficacious and rigorously evaluating new (and in some cases existing) treatments for obesity are clearly in order. Conducting such evaluations to the highest standards and so that they are maximally informative requires an understanding of best methods for the conduct of randomized clinical trials in general and how they can be tailored to the specific needs of obesity research in particular. . . . We will offer a two-day meeting in which leading obesity researchers and methodologists convene to discuss best practices for randomized clinical trials in obesity.

“Rigorously evaluating new treatments”? How about evaluating them at all? Evaluation of new treatments (such as new diets) is already so difficult that it almost never occurs; here is a conference about how to make such evaluations more difficult.

This mistake happens in other areas, too, of course. Two research psychiatrists have complained that misguided requirements for rigor have had a very bad effect on finding new treatments for bipolar disorder.

Too Few Riders, Too Many Stolen Bases

I heard two excellent talks last week. Bent Flyvbjerg, a professor of planning at Aalborg University in Aalborg, Denmark, spoke on “Survival of the Unfittest: Why the Worst Megaprojects [subways, airports, bridges, tunnels] Get Built.” Why? Because of false claims. Cost estimates turn out to be much too low and benefit estimates (such as ridership) much too high. Boston’s Big Dig, for example, has already cost more than three times the original estimate. Cost estimates were too low in 90% of projects, Flyvbjerg said. The tools used to make those estimates have supposedly improved a great deal over the last few decades, but their accuracy has not improved.

Lovallo and Kahneman have argued that the underlying problem is “optimism bias”; Flyvbjerg, however, believes that the problem is what he now calls strategic misrepresentation — when he used the term lying, people got upset. The greater the misrepresentation, the more likely the project would be approved — or rather, the greater the truth, the more likely the project would not be approved. That is a different kind of bias.

An everyday example is me and my microwave oven. Sometimes I use my microwave oven to dry my clothes. I’ve done this dozens of times, but I continue to badly underestimate how long it will take. I guess that a shirt will take 8 minutes to dry; it takes 15 minutes. I know I underestimate — but I keep doing it. This is not optimism bias. Microwaving is not unexpectedly difficult or unpredictable. The problem, I think, is the asymmetry of the effects of error. If my guess is too short, I have to put the shirt back in the microwave, which is inconvenient; if my guess is too long, the shirt may burn — which corresponds to the project not being approved.

Incidentally, Flyvbjerg has written a paper defending case studies and by extension self-experimentation. He quotes Hans Eysenck, who originally dismissed case studies as anecdotes: “Sometimes we simply have to keep our eyes open and look carefully at individual cases — not in the hope of proving anything but rather in the hope of learning something.” Exactly.

The other excellent talk (“Scagnostics” — scatterplot diagnostics) was by Leland Wilkinson, author of The Grammar of Graphics and developer of SYSTAT, who now works at SPSS. He described a system that classifies scatterplots. If you have twenty or thirty measures on each of several hundred people or cities or whatever, how do you make sense of it? Wilkinson’s algorithms measure such properties of a scatterplot as its texture, clumpiness, skewness, and four others I don’t remember. You use these measures to find the most interesting scatterplots. He illustrated the system with a set of baseball statistics — many measurements made on each of several hundred major-league baseball players. The scatterplot with the most outliers was stolen bases versus age. Stolen bases generally decline with age but there are many outliers. Although a vast number of statistical procedures assume normal distributions, Wilkinson’s tools revealed normality to be a kind of outlier. In the baseball dataset, only one scatterplot had both variables normally distributed: height versus weight. These tools may eventually be available in R.
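Wilkinson’s actual scagnostics are graph-theoretic (built on minimum spanning trees and similar structures), but the flavor of the idea — score every scatterplot on a few numeric measures, then rank the plots by those scores — can be sketched with much cruder statistics. Everything below is my own toy illustration, not his algorithm:

```python
import math

def skewness(xs):
    """Third standardized moment -- a crude measure of asymmetry."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

def iqr_outliers(xs):
    """Count points beyond 1.5 * IQR of the quartiles."""
    s = sorted(xs)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    spread = q3 - q1
    lo, hi = q1 - 1.5 * spread, q3 + 1.5 * spread
    return sum(1 for x in xs if x < lo or x > hi)

def diagnostics(x, y):
    """Score one scatterplot on a few crude measures."""
    return {
        "outliers": iqr_outliers(x) + iqr_outliers(y),
        "skew_x": skewness(x),
        "skew_y": skewness(y),
    }

def rank_plots(plots):
    """Order named scatterplots, most outlier-heavy first."""
    return sorted(plots, key=lambda k: diagnostics(*plots[k])["outliers"],
                  reverse=True)
```

With a few hundred variables you would compute such scores for every pair of columns and look only at the top of each ranking — which is the point of the approach: let the measures, not your patience, decide which of the thousands of scatterplots to draw.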

David Jenkins on the Shangri-La Diet

David Jenkins, a professor of nutrition at the University of Toronto, invented the glycemic index, probably the most important nutritional innovation of the last thirty years. The glycemic index helped me permanently lose 6 pounds (see Example 7 of this paper). While preparing her CBC piece about the Shangri-La Diet, Sarah Kapoor interviewed Jenkins. Here is a partial transcript of what he said.