Myopia Increases Innovation

Big public works projects inevitably cost far more than the original budget. I heard a talk about this a few years ago. The speaker gave many examples, including Boston’s Big Dig. His explanation was that these projects would not be approved if voters were told the truth. The German news magazine Der Spiegel has just published an interview with several architects responsible for recent German projects with especially large discrepancies between what people were told at the beginning and the unfolding reality — Berlin’s new airport, for example. The article’s headline calls them “debacles”. One architect gives the same explanation as the speaker I heard: “The pure truth doesn’t get you far in this business. The opera house in Sydney would never have been approved if they had known how much it would cost from the start.”

I disagree. I see the same massive underestimation of time and effort in projects that I do and that my colleagues and friends do, projects we do for ourselves that require no one’s approval. I think something will take an hour. It takes five hours. Plainly the world is more complicated than our mental model of it, sure, but there is more to it than that. Someone did a survey of people in Maryland who had been in a car accident so bad they had had to go to the hospital. Within only a year, a large fraction of them (half?) had forgotten about it. When asked if within the last year they had had an accident so bad they were hospitalized, they said no. Apparently we forget difficulties, even extreme ones, really fast. If you forget difficulties, you will underestimate them.

One explanation, which I’ve heard attributed to Gregory Bateson: “If I had realized how difficult everything would be, I couldn’t have done any of it.” From Malcolm Gladwell’s excellent review in this week’s New Yorker of a biography of Albert Hirschman, the economist, I learned that Hirschman, had he realized that this was human nature, would have given a different, evolutionary explanation: We underestimate difficulties because this way of thinking increases innovation. Debacle . . . or opportunity? Difficulty is the mother of invention.


Assorted Links

  • An Epidemic of Absence (book about allergies and autism)
  • Professor of medicine who studies medical error loses a leg due to medical error. “Despite calls to action by patient advocates and the adoption of safety programs, there is no sign that the numbers of errors, injuries and deaths [due to errors] have improved.” Nothing about consequences for the person who made the error that caused him to lose a leg.
  • Doubts about spending a huge amount of research money on a single project (brain mapping), which has yet to produce even one useful result.
  • Cancer diagnosis innovation by somebody without a job (a 15-year-old)
  • Someone named Rob Rhinehart has greatly reduced the time and money he spends on food by drinking something he thinks contains all essential nutrients. Someone pointed out to him that he needs bacteria, which he doesn’t have. (No doubt several types of bacteria are best.) He doesn’t realize that Vitamin K has several forms. I suspect he’s getting too little omega-3. This reminds me of a man who greatly reduced how much he slept by sleeping 15 minutes every 3 hours. It didn’t work out well for him (his creativity vanished and he became bored and unhappy). In Rhinehart’s case, I can’t predict what will happen so it’s fascinating. When something goes wrong, however, I’ll be surprised if he can figure out what caused the problem.

Thanks to Amish Mukharji.

Online Teaching Versus What?

Is online teaching (e.g., MOOCs) a big deal? In an essay (“Why Online Education Works”), Alex Tabarrok argues for the value of online education (meaning online lectures) compared to traditional lectures. A friend told me yesterday that MOOCs were “a frontier of pedagogy”. No doubt online lectures will make lecture classes cheaper and more available. Lots of things have gone from scarce/expensive to common/cheap. With things whose effects we understand (e.g., combs), the result is straightforward: more people benefit. With things whose effects we don’t understand, the results are less predictable. Did the spread of sugar help us? Hard to say. Did the spread of antibiotics help us? Hard to say. It may have helped sustain simplistic ideas about what causes disease (e.g., “acne is caused by bacteria”, “ulcers are caused by bacteria”), reducing effective innovation. Do we have a good idea of the effects of lectures (or their lack of effect), or a good theory of college education? I don’t think so. Could their spread help sustain simplistic ideas about education? Maybe.

As books spread, the teaching of reading increased. Everyone understood that books were useless if people couldn’t read. The introduction of PCs was accompanied by user-interface improvements. This helped PCs become influential, not restricted to hobbyists. Will online education be accompanied by similar make-it-more-palatable changes? I have heard nothing about this. Its advocates seem to think the current system is fine and if it could only be available to more people…

Online lectures will make much difference only if the cost and quality of lectures are the weakest link in what strikes me as a process with many links. It would be a coincidence if the link that can be most easily strengthened turned out to be the weakest link. For example, is the cost of lectures the main thing driving up the cost of college? That would be wonderful if it were true, but I haven’t seen evidence that it is. At Berkeley, for example, there has been enormous growth in the administrator-to-faculty ratio.

Here are two arguments that online lectures are a big step forward:

It will help people in poor countries, like Zambia. There is a long history of people in rich countries misunderstanding people in poor countries. Several years ago I was in Guatemala. I heard about a school being built by a (rich country) religious group in a poor area. After two years, the American running it wanted to leave. No member of the community took it over. It disappeared. “Maybe they didn’t want a school,” said the graduate student who told me about it. Maybe few people in Zambia want online lecture classes. (I have no idea.) If so, the benefit will be small.

It will save labor. Each lecture will be viewed many more times. Saving labor is not always good. It is plausible that the growth of online lectures will mean fewer college professors. Colleges and universities are among the few places where people do research and almost the only places where they do unrestricted research. Most of the research is useless; a tiny fraction is enormously useful. At the moment, lectures subsidize research. By giving lectures, professors are allowed to do research. Fewer professors, less unrestricted research, less innovation. “Wasteful” lecturing might be labor we shouldn’t save.

One thing I like about online classes is the possibility they will connect people who want to learn the same thing, like ordinary classes do. They can help each other, encourage each other, and so on. I have no doubts about the value of this. (I find language partners — I teach them English, they teach me Chinese — way more pleasant and helpful than tutors.)

At Berkeley, I tried to find good lecturers. With two exceptions (Tim White and Steve Glickman) I failed. Almost all lectures, even those by brilliant researchers, were dreary. (A shining exception: Robin Hanson.) They suffered from a lack of stories and a lack of emotion. (At Tsinghua, things are worse. A friend who majors in bioengineering told me that 80% of her teachers lecture by reading from the textbook.) The power of professors over students in some ways resembles the power of doctors over patients. Just as there is little pressure on doctors to understand disease (if antibiotics have bad effects, it doesn’t harm the doctor who prescribed them), there is little pressure on most professors — at least at the elite research universities that produce online lectures — to understand education. At Berkeley, many professors say they teach their undergraduate students “how to think” or “how to think critically”. In fact, they teach their students to imitate them. The simplest form of education. This is neither good nor bad — it depends on the student — but it is the opposite of sophisticated.

A few months ago I assigned my Tsinghua students (freshmen) to read 60 pages of The Man Who Would Be Queen by Michael Bailey, a book full of stories and emotion. Any 60 pages, their choice. No test, no written assignment, no grade. One student told me it was the first book in English she’d ever finished. It was so good she couldn’t stop reading. My assignment had changed real-life behavior: what my student read in her spare time. Maybe it changed her tolerance of homosexuality and the tolerance of those around her. My assignment (not a textbook or academic paper, not a fixed reading) and evaluation (none) differed from conventional college teaching. Experiences like this make me wonder what fraction of important learning during college happens due to lecture classes. (In my case, the fraction was zero.) If the fraction is low, it suggests that online learning won’t make much difference.

How Helpful Are New Drugs? Not So Clear

Tyler Cowen links to a paper by Frank Lichtenberg, an economist at Columbia University, that tries to estimate the benefits of drug company innovation by estimating how much new drugs prolong life compared to older drugs. The paper compares people equated in a variety of ways except the “vintage” (date of approval) of the drugs they take. The question Lichtenberg wants to answer: Does taking newer drugs increase life-span? He concludes it does. He says his findings “suggest that two-thirds of the 0.6-year increase in the life expectancy of elderly Americans during 1996-2003 was due to the increase in drug vintage” — that is, to newer drugs.

An obvious problem is that Lichtenberg has not controlled for health-consciousness. This is a standard epidemiological point. People who adopt Conventional Healthy Behavior X (e.g., eat less fat) are more likely to adopt Conventional Healthy Behavior Y (e.g., find a better doctor) than those who don’t. For example, a study found that people who drink a proper amount of wine eat more vegetables. Another reason for a correlation between conventionally-healthy practices is mild depression. People who are mildly depressed are less likely to do twenty different helpful things (including “eat healthy” and “find a better doctor”) than people who are not mildly depressed. (And mild depression seems to be common.) Perhaps doctors differ. (Lichtenberg concludes there are big differences.) Perhaps better doctors (a) prescribe more recent drugs and (b) do other things that benefit their patients. Lichtenberg does not discuss these possibilities.
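To make the confounding worry concrete, here is a toy simulation. It is entirely my own sketch: every variable name and number below is invented, not taken from Lichtenberg’s paper. A hidden “health-consciousness” trait raises both the chance of taking newer drugs and lifespan. Even when the true causal effect of drug vintage is zero, the newer-drug group lives about two years longer.

```python
import random

def simulate(n=100_000, true_drug_effect=0.0, seed=1):
    """Toy confounding demo: hidden health-consciousness drives both
    drug vintage and lifespan, so vintage 'predicts' lifespan even
    when its true causal effect is zero. All numbers are invented."""
    rng = random.Random(seed)
    new, old = [], []
    for _ in range(n):
        hc = rng.gauss(0, 1)                    # hidden health-consciousness
        takes_new = (hc + rng.gauss(0, 1)) > 0  # health-conscious people tend to get newer drugs
        lifespan = 78 + 2 * hc + true_drug_effect * takes_new + rng.gauss(0, 5)
        (new if takes_new else old).append(lifespan)
    # difference in mean lifespan, newer-drug group minus older-drug group
    return sum(new) / len(new) - sum(old) / len(old)

# About two years of apparent "benefit" of newer drugs, all of it confounding:
print(round(simulate(), 2))
```

Controlling for the observables in the data would not remove this gap, because health-consciousness is unmeasured. That is the standard epidemiological point above, in miniature.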

A subtle problem with Lichtenberg’s conclusion that we benefit from drug company innovation is that drug-company-like thinking — the notion that health problems should be “solved” with drugs — interferes with a better way of thinking: the notion that to solve a health problem, we should find out what aspects of the environment cause it. I suppose this is why we have Schools of Public Health — because this way of thinking, advocated at schools of public health, is so incompatible with what is said and done at medical schools. Public health thinking has a clear and impressive track record — for example, the disappearance of infectious disease as a major source of death. There are plenty of other examples: the drop in lung cancer after it was discovered that smoking causes lung cancer, the drop in birth defects after it was discovered that folate deficiency causes birth defects. Thinking centered on drugs has done nothing so helpful. Spending enormous amounts of money to develop new drugs shifts resources away from more cost-effective research: about environmental causes and prevention. Someone should ask the directors of the Susan G. Komen Foundation: Why “race for the cure”? Wouldn’t spending the money on prevention research save more lives?


Assorted Links

Thanks to Rashad Mahmood.

Two Dimensions of Economic Growth: GDP and Useful Knowledge

Ecologists understand the exploit/explore distinction. When an animal looks for food, it can either exploit (use previous knowledge of where food is) or explore (try to learn more about where food is). With ants, the difference is visible. A trail of ants to a food source: exploit. A solitary wandering ant: explore. With other animals, the difference is more subtle. You might think that when a rat presses a bar for food, that is pure exploitation. However, my colleagues and I found that when expectation of food was lower, there was more variation — more exploration — in how the rat pressed the bar. In a wide range of domains (genetics, business), less expectation of reward leads to more exploration. In business, this is a common observation. For example, yesterday I read an article about the Washington Post that said its leaders failed to explore enough because they had a false sense of security provided by their Kaplan branch. “Thanks to Kaplan, the Post Company felt less pressure to make hard strategic choices—and less pressure to venture in new directions,” wrote Sarah Ellison.

Striking the right balance between exploitation and exploration is crucial. If an animal exploits too much, it will starve when its supply of food runs out. If it explores too much, it will starve right away. Every instance of collapse in Jared Diamond’s Collapse: How Societies Choose to Fail or Succeed was plausibly due to too much exploitation, too little exploration (which Diamond, even though he is a biologist, fails to say). I’ve posted several times about my discovery that treadmill walking made studying Chinese more pleasant. I believe walking creates a thirst for dry knowledge. My evolutionary explanation is that this pushed prehistoric humans to explore more.
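The balance described above can be sketched as a tiny epsilon-greedy simulation. This is my illustration, with made-up numbers (patch size, discovery probability), not anything from the ecology literature: a forager that never explores gathers exactly the food in its known patch and then starves, while a moderate explorer keeps finding fresh patches.

```python
import random

def forage(epsilon, days=200, seed=0):
    """Minimal epsilon-greedy foraging sketch (illustrative only).

    One known food patch holds a finite stock; exploring has a small
    chance of discovering a fresh patch. Returns total food gathered.
    """
    rng = random.Random(seed)
    patch = 50                       # food left in the known patch
    food = 0
    for _ in range(days):
        if rng.random() < epsilon:   # explore: wander, maybe find a new patch
            if rng.random() < 0.1:
                patch = 50
        elif patch > 0:              # exploit: eat from the known patch
            patch -= 1
            food += 1
    return food

# A pure exploiter gathers exactly 50, then starves when the patch is gone;
# a moderate explorer keeps replenishing its supply.
print(forage(0.0), forage(0.2))
```

Pushing epsilon toward 1 shows the other failure mode: the forager wanders so much that it rarely eats even when food is available.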

I have never heard an economist make this point: the need for a proper balance between exploit and explore. It is relevant in a large fraction of discussions about how to spend money. For example, yesterday I listened to the latest EconTalk podcast, a debate between Bob Frank and Russ Roberts about whether it would be a good idea for the American government to spend $2 trillion on infrastructure projects (fix bridges, etc.). Frank said it would create jobs, and so on — the usual argument. Roberts said if fixing bridges was such a good idea, why hadn’t this choice already been made? Roberts could have said, but didn’t, that massive government shovel-ready expenditures, such as $2 trillion spent on infrastructure repair, inevitably push the exploit/explore balance toward exploit, which is dangerous. This is an argument against all Keynesian stimulus-type spending. I have heard countless arguments about such spending. I have never heard this argument made.

If you want examples of how the American economy suffers from a profound lack of useful new ideas, look at health care. As far as I know, there are no recorded instances of a society dying because of too much exploration. The problem is always too much exploitation. People at the top — with a tiny number of exceptions, such as the Basques — overestimate the stability of their position. At the end of The Economy of Cities, Jane Jacobs says that if a spaceship landed on Earth, she would want to know how its civilization avoided overexploitation. When societies exploit too much and explore too little, said Jacobs, problems (in our society, problems such as obesity, autism, autoimmune disease, etc.) stack up unsolved.

Today is China’s birthday. Due to overexploitation, I believe China is in even worse economic shape than America. Ron Unz, whom I respect, misses this.

My broad point is that a lot of economic thinking, especially about growth and development, is one-dimensional (measuring primarily growth of previously existing goods and services — exploitation) when it should be two-dimensional (measuring both (a) growth of existing stuff and (b) creation of new goods and services). Exploration (successful exploration) is inevitably tiny compared to exploitation, but it is crucial there be enough of it. If there is a textbook that makes this point, I haven’t seen it. An example of getting it right is Hugh Sinclair’s excellent new book Confessions of a Microfinance Heretic (copy sent me by publisher) that debunks microcredit. Leaving aside the very high interest rates, the use of microcredit loans to buy TVs, and so on, microcredit is still a bad idea because the money is, at best, used for a business that copies an existing business. (The higher the interest rate, the less risk a loan recipient dares take.) When a new business copies an already-existing business, you are taking an existing pie (e.g., demand for milk, if the loan has been used to buy a cow and sell its milk) and dividing it into one more piece. The pie does not get bigger. As Sinclair says, the notion that dividing existing pies into more pieces will “create a poverty-free world” is, uh, not worthy of a Nobel Prize.

Sure, it’s hard to measure growth of useful knowledge. (It is perfectly possible for a company to waste its entire R&D budget.) However, I am quite sure that realism does better than make-believe — and the notion that growth of GDP is a satisfactory metric of economic growth is make-believe. If you’ve ever been sick, or gone to college, and have a sense of history, you will have noticed the profound stagnation in two unavoidable sectors (health care and education) of our economy, sectors that are nevertheless growing really fast.

“We’re Economists. And We Don’t Care About Innovation”

In a Planet Money show about whether Super Bowls help host cities, a sports economist named Victor Matheson, a professor at the College of the Holy Cross, described himself and other sports economists:

We’re economists. And we’re concerned about equity and we’re concerned about efficiency. And what most economists see . . .

He didn’t say “We’re concerned about innovation”. The way he ignores innovation reflects the whole field of economics. Here’s the same thing from Christina Romer. In an editorial about whether manufacturing deserves special treatment, she considers only productivity and equity:

It might be better to enact policies that will make all American businesses and workers more productive and successful. . . Today, we face a profound shortfall of demand. . . .We need actions that raise overall demand. [She doesn’t say we are in a period of profound stagnation in most industries, which is also true.] . . . More aggressive monetary policy that lowered the price of the dollar would stimulate all our exports . . . Moving is very costly for dislocated workers with ties to their communities. . . Manufacturing jobs are seen as one of the few sources of well-paying jobs for less-educated workers. . . . Public policy . . . should be based on hard evidence of market failures, and reliable data on the proposals’ impact on jobs and income inequality.

As if innovation (and the lack of it) didn’t exist. Here’s an example from Robert Reich, in a post “rebut[ting] the seven biggest economic lies”:

Shrinking government generates more jobs. Wrong again. It means fewer government workers – everyone from teachers, fire fighters, police officers, and social workers at the state and local levels to safety inspectors and military personnel at the federal. And fewer government contractors, who would employ fewer private-sector workers. According to Moody’s economist Mark Zandi (a campaign advisor to John McCain), the $61 billion in spending cuts proposed by the House GOP will cost the economy 700,000 jobs this year and next.

Nothing about the effect of shrinking government on innovation. Many types of innovation increase jobs.

This is like doctors ignoring the immune system. Ignoring the effect of this or that policy on innovation is likely to lead to decisions that reduce innovation in favor of something easier to measure or defend, such as productivity or equity. The cumulative effect of ignoring innovation is stagnation and decline, caused by problems that, for lack of innovation, go unsolved and get worse and worse.

Tyler Cowen (The Great Stagnation) and Alex Tabarrok (Launching the Innovation Renaissance) are absolutely right to focus on innovation and the lack of it. The obesity epidemic is 30 years old — a good example of a problem that has gotten worse and worse. Judging by Tara Parker-Pope’s reporting, mainstream weight researchers don’t have a clue — in the form of empirical results — about how to solve it. Outside mainstream academia, the dominant weight-loss idea is a low-carb diet. That idea is a hundred years old (Banting). How little innovation there has been. That Parker-Pope failed to criticize researchers for their lack of progress shows how deep the problem is. She appears not to grasp the possibility that they could have done better.

Assorted Links

  • A brash high-school student discovers — maybe by accident — how much famous writers, such as Ralph Ellison, Norman Mailer, and John Updike, don’t want to write. Any excuse to avoid writing will do.
  • A pretty good talk by John Cochrane, a University of Chicago professor of economics, called “Restoring Robust Economic Growth in America”. What’s most interesting is what’s missing. At one point he asks: “Why are we stagnating? I don’t know. I don’t think anyone knows, really. That’s why we’re here at this fascinating conference.” In spite of this topic, his talk contains nothing about what controls the rate of innovation. Not only does he not know anything about this (judging by this talk), he doesn’t even realize the gap in his knowledge (judging by this talk). Shades of Thomas Sargent. It’s as if a Harvard Medical School professor spoke about how to fight disease without mentioning the immune system, without even appearing to know that the immune system exists. (Which happens.)
  • Garum, a fermented fish sauce. It was the “supreme condiment” of ancient Rome.

Thanks to Allan Jackson and Peter Couvares.

Bryan Caplan Disses College

In this post, Bryan Caplan says (again) that college is vastly overrated. Like me, he says that the only thing college professors know how to do is be professors and that is all they can actually teach. Graduate school, where professors teach students who want to be professors, makes sense. Undergraduate school, where almost no students will become professors, does not. Like me, he ridicules the idea that professors teach students “how to think”.

He omits half of my criticism. It isn’t just teaching (“how to think” — please!), it’s also evaluation. Professors are terrible at evaluation. Their method of judging student work is very simple: How close is it to what I would have done? The better you can imitate the professor, no matter what the class, the higher your grade. This is one size fits all with a vengeance because there is no opting out. Sure, you can choose your major. But every class is taught by a professor. What if your strengths lie elsewhere — in something that your professors aren’t good at? Tough luck. Your strengths will never be noticed or encouraged or developed.

At Berkeley (where Bryan went and I taught) and universities generally, the highest praise is brilliant. Professor X is brilliant. Or: Brilliant piece of work. People can do great things in dozens of ways, but somehow student work is almost never judged by how beautiful, courageous, practical, good-tasting, astonishing, vivid, funny, moving, comfortable, and so on it is. Because that’s not what professors are good at. (Except in the less-academic departments, such as art and engineering.) To fail to grasp that students can excel in dozens of ways is to seriously shortchange them. To value them at much less than they are worth — and, above all, to fail to help them grow and find their place in the world after college.

At Berkeley, I figured this out in a way that a libertarian should appreciate: I gave my students much more choice. For a term project, I said they could do almost anything so long as it was off-campus and didn’t involve library work. What they chose to do revealed a lot. I began to see not just how different they were from me but how different they were from each other. One of my students chose to give a talk to a high-school class. This was astonishing because she had severe stage fright. Every step was hard. But she did it. “I learned that if I really wanted to, I could conquer my fear,” she wrote.

One of my Tsinghua students recently asked me: “Are you a brave man?” (She wanted to give me a gift of stinky tofu.) I said no. She said she thought I was brave for coming to China. Perhaps. I have never done anything as brave as what my student with stage fright did. I have never done something that terrified me — much less chosen to do such a thing. Her homework hadn’t been very good. When I read about her term project — conquering stage fright — I realized how badly I had misjudged her. How badly I had failed to appreciate her strengths. I saw that it wasn’t just her and it wasn’t just me. By imposing just one narrow way to excel, the whole system badly undervalued almost everyone. Almost everyone had strengths the system ignored. And it’s a system almost everyone must go through to reach a position of power!

This is related to what I call the hemineglect of economists — they fail to see that innovation should be half of economics. Diversity of talents and interests is central to innovation because new things are so often mixtures of old things. By rewarding only one kind of talent, colleges suppress diversity of talent and thereby reduce innovation. (It’s no coincidence that Steve Jobs, whom we associate with innovation, didn’t finish college. He saw his talents wouldn’t be valued.) Psychologists are also guilty of this. Many psychologists glorify IQ. Somehow having a high IQ is crucial to success . . . somehow a society that doesn’t encourage people with high IQs will do badly. And so on. In The Bell Curve, Herrnstein and Murray showed that high IQ scores correlated with other measures of desirable social outcomes (e.g., income — people with higher IQ scores made more money). Like many successful people, they failed to see the possibility that the whole world had been shaped to reward the things that the people in power (i.e., they themselves) are good at. Not because those talents work (= produce a better economy). But because they are easy to measure (by college grades). The glorification of IQ has had a solipsistic aspect and has ignored what should be obvious, that diversity of talents and skills promotes innovation. Without a diverse talent pool, any society will do a poor job of solving the problems that inevitably arise.

Duct Tape, the Eurozone, Status-Quo Bias, and Neglect of Innovation

In 1995, I visited my Swedish relatives. We argued about the Euro. They thought it was a good idea, I thought it had a serious weakness.

ME It ties together economies that are different.

MY AUNT It reduces the chance of war in Europe.

You could say we were both right. There have been no wars between Eurozone countries (supporting my aunt) and the Eurozone is now on the verge of breaking apart for exactly the reason I and many others pointed out (supporting me).

Last week a friend said to me that Europe was in worse shape than America. I was unconvinced. I said that I opposed Geithner’s “duct-tape solution”. It would have been better to let things fall apart and then put them back together in a safer way.

MY FRIEND Duct-tape works.

ME What Geithner did helped those who benefit from the status quo and hurt those who benefit from change. Just like duct tape.

This struck me as utterly banal until I read a one-sided editorial in The Economist:

The consequences of the euro’s destruction are so catastrophic that no sensible policymaker could stand by and let it happen. . . . the threat of a disaster . . . can anything be done to avert disaster?

and similar remarks in The New Yorker (James Surowiecki):

The financial crisis in Europe . . . has now entered a potentially disastrous phase . . . with dire consequences not just for Europe but also for the rest of us. . . . This is that rarest of problems—one that you really can solve just by throwing money at it [= duct tape]

Wait a sec. What if the Eurozone is a bad idea? Like I (and many others) said in 1995? Why perpetuate a bad idea? Why drive further in the wrong direction? Sure, the dissolution will bring temporary trouble (“disaster”, “dire consequences”), but that will be a small price to pay for getting rid of a bad idea. Of course the Euro had/has pluses and minuses. Anyone who claimed to know that the pluses outweighed the minuses (or vice versa) was a fool or an expert. Now we know more. Given that what the nay-sayers said has come to pass, it is reasonable to think that they (or we) were right: The minuses outweigh the pluses.

You have seen the phrase Japan’s lost decade a thousand times. You have never seen the phrase Greece’s lost decade. But Greeks lost an enormous amount from being able to borrow money for stupid conventional projects at too low a rate. Had loans been less available, their projects would have been more original (the less debt involved, the easier it is to take risks) and started at a smaller scale. Which I believe would have been a better use of their time and led to more innovation. Both The Economist’s editorial writer and Surowiecki have a status-quo “duct-tape” bias without realizing it.

What’s important here is not what two writers, however influential their magazines, think or fail to think. It is that they are so sure of themselves. They fail to take seriously an alternative (breakup of the Eurozone would in the long run be a good thing) that has at least as much to recommend it as what they are sure of (the breakup would be a “disaster”). I believe they are so sure of themselves because they have absorbed (and now imitate) the hemineglect of modern economics. The whole field, they haven’t noticed, has an enormous status-quo bias in its failure to study innovation. Innovation — how new goods and services are invented and prosper — should be half the field. Let me repeat: A few years ago I picked up an 800-page introductory economics textbook. It had one page (one worthless page) on innovation. In this staggering neglect, it reflected the entire field. The hemineglect of economics professors is just as bad as the hemineglect of epidemiologists (who ignore immune function, study of what makes us better or worse at fighting off microbes) and statisticians (who pay almost no attention to idea generation).

MORE Even Joe Nocera, whom I like, has trouble grasping that the Euro might be a bad idea. “The only thing that should matter is what works,” he writes. Not managing to see that the Euro isn’t working.