IDEO Visits a Hospital

Science is a form of systematic innovation, right? A particular way of learning more about the world. The design firm IDEO has a systematic way of coming up with new designs, illustrated in this hospital visit. My guess is that IDEO and science don’t have much in common in spite of the surface similarity. The product design done by IDEO is a form of engineering. Science and engineering are like two phases in the lives of ants: random search (science) and path following (engineering).

When I co-taught a course about office design, we visited IDEO (in Palo Alto). Most of their work was contract (design a new X for Client Y) but they also had a small group of toy designers who came up with new toys on spec. Their mail room was lush, with magazines, food, and a TV. Its purpose, said the CEO, was to cause people to interact. It was the big shady tree of an African village.

Where Do Useful Discoveries Come From?

From Andrew Gelman’s blog:

On page xxi [of Nassim Taleb’s new book The Black Swan], Taleb says how almost no great discovery came from design and planning.

I said something similar to a graduate student last week: Really useful discoveries are almost never the result of trying to do something useful; they are almost always due to accidents. Penicillin, for example. If you notice something by accident, it must be a big effect; otherwise you wouldn’t have noticed it. That’s a great place to start: A big effect you didn’t know about.

I’ll have to see what else The Black Swan says about this. It makes self-experimentation look really good: (a) It’s much easier to do a self-experiment than to do a conventional experiment so there is more chance of accidents; and (b) because we pay close attention to ourselves, it’s much easier to notice the unexpected with self-experimentation than with conventional research. Every useful finding in my long self-experimentation paper — breakfast, morning faces, standing, morning light, sugar water — came from an accidental discovery. In four of the five cases, the accident happened during a self-experiment; I varied something to see if X would change and noticed that Y changed. The exception was sugar water, whose appetite-suppressing effects I noticed while traveling. Hmm. Maybe travel is a type of self-experimentation. Or self-experimentation a type of travel. Certainly they are closely related.

Does Omega-3 Affect the Brain?

The last three data sets I’ve posted — one from Tim Lundeen, two from me (here and here) — provide evidence that omega-3 affects the brain. The evidence has several good features:

1. Two people.

2. Three tasks.

3. Two ways of varying omega-3.

4. Strong effects (that is, large t values).

5. Easy to obtain.

Does omega-3 affect the brain? This is a good question on which to start a research project, because there is a reasonable chance the answer is yes.

The placebo/expectations explanation — which, based on the lack of effect of placebos in most studies, is implausible to begin with — has trouble with several facts: 1. The initial discovery was a surprise. 2. Tim’s results involved comparison of two plausible doses. 3. Tim had earlier found that dose increases had no effect. 4. Tim’s results had a pattern I have never seen (and thus Tim couldn’t have expected). 5. My results had two different time courses.

Even more interesting than the idea that how much omega-3 we eat might affect how well our brains work are two more subtle ideas that are also becoming plausible: (a) the average diet (very low in omega-3) is very suboptimal and (b) improvement can be noticed quickly and easily.

In the latest U.S. government nutrition guidelines, there is no omega-3 requirement.

Directory of my omega-3 posts.

If Science Had Been Invented More Than Once

Last night, at a Vietnamese restaurant, I had an avocado shake for dessert. On the way home I stopped at a Chinese bakery and got garlic pork cookies. Had science, like cooking, been invented more than once, what would other scientific traditions — other ways of doing science — look like? My guess is they would not include:

1. Treating results with p = 0.04 quite differently from results with p = 0.06. Use of an arbitrary dividing line (p = 0.05) makes little sense.

2. Departments of Statistics. Departments of Scientific Tools, yes; but to put all one’s resources into figuring out how to analyze data and none into figuring out how to collect it is unwise. The misallocation is even worse because most of the effort in a statistics department goes into figuring out how to test ideas; little goes into figuring out how to generate ideas. In other words, almost all the resources go toward solving one-quarter of the problem.

3. Passive acceptance of a negative bias. The average scientist thinks it is better to be negative (“skeptical”) than positive when reacting to other people’s work. What is the positive equivalent of skeptical — a word that means appreciative in a “good” way? (Just as skeptical means disbelieving in a “good” way.) There isn’t one. However, there’s gullible, further showing the bias. Is there a word that means too skeptical, just as gullible means too accepting? Nope. The overall negative bias is (male) human nature, I believe; it’s the absence of attempts to overcome the bias that is cultural.

I used to subscribe to the newsletter of the Center for Science in the Public Interest (CSPI). I stopped after I read an article about selenium prompted by a new study showing that selenium supplements reduced the rate of some cancer (skin cancer?). In the newsletter article, someone at CSPI pointed out some flaws in the study. Other data supported the idea that selenium reduces cancer (and showed that the supposed flaws didn’t matter), but that was never mentioned; the new study was discussed as if it were the only one. Apparently the CSPI expert didn’t know about the other data and couldn’t be bothered to find out. And the CSPI writer saw nothing wrong with that. Yet that’s the essence of figuring out what’s good about a study: Figuring out what it adds to previous work.

My earlier post about another bit of scientific culture: the claim that “correlation does not equal causation.”

Too Much Emphasis on Failure

In his blog a few days ago, as I mentioned earlier, Nicholas Kristof printed a letter from a University of North Carolina graduate student about why she was not going to enter Kristof’s contest to go to Africa with him. Kristof wrote too much about failure, she said:

[Quoting Kristof:] “I’m hoping that you’ll be changed when you see a boy dying of malaria because his parents couldn’t afford a $5 mosquito net, or when you talk to a smart girl who is at the top of her class but is forced to drop out of school because she can’t afford a school uniform.” . . . The story of Africa in turmoil is the African narrative that many Americans – and certainly those who read The New York Times – already know. It is virtually the only type of reporting that Western news outlets broadcast about the continent. . . Americans don’t need any more stories of a dying Africa. Instead, we should learn of a living one. Kristof and his winners should investigate how it is that Botswana had the highest per-capita growth of any country in the world for the last 30 years of the twentieth century.

I believe she is correct. The Times and — I’ll take her word for it — “Western news outlets” in general have made a serious mistake in their Africa coverage: far too much coverage of failure relative to success. This is an especially curious misjudgment because journalists generally like feel-good stories.

Could an entire well-respected profession do the wrong thing for a long time? Well, Jane Jacobs thinks so. In a 2000 interview, she said this about economists:

One place where past economic theory has gone wrong in a subtle way is that it has always been called upon for explanations of breakdowns and trouble. Look how foreign aid, even today, is all about poverty and where things are not working. There is no focus on trying to learn how things are working when they work. And if you are going to get a good theory about how things work, you have to focus on how they work, not on how they break down. You can look forever at a broken down wagon or airplane and not learn what it did when it was working.

Maybe you say Jacobs wasn’t a real economist (because she didn’t write mainstream academic papers). Well, consider this. In the 1960s, Saul Sternberg changed the face of experimental psychology when he showed what could be done with reaction-time experiments, which are set up so that the subject almost always gets the right answer. Before Sternberg, memory and perception were usually studied via percent-correct experiments, set up so that subjects were often wrong.

Sternberg’s reaction-time research was so much more revealing than the percent-correct research that preceded it that almost everyone switched to using reaction time. The profession of experimental psychologists had done the wrong thing for a long time.

Omega-3 and Arithmetic (several analyses)

In a recent post I described Tim Lundeen’s arithmetic data. He found that increasing his daily dose of DHA seemed to increase the speed at which he did simple arithmetic. Here is the graph:

Tim Lundeen's arithmetic data

I didn’t bother to do any statistical tests because I thought the DHA effect was obvious. However, someone in the comments said it wasn’t obvious to them. Fair enough.

If DHA has no effect, then the scores with more DHA should be the same as the just-preceding scores with less DHA. There are practice effects, of course: thousands of learning experiments have found that practice makes a difference at first and then its effect goes away, so that additional practice no longer changes behavior. So I analyzed the data after practice stopped having an effect, after about Day 40. (And I left out days preceded by a gap in testing — e.g., a day preceded by a week off.)

If I do a t-test comparing low-DHA days (after Day 40) with high-DHA days, I get a huge t value — about 9. If you’re familiar with real-life t values, I’m sure you’ll agree that’s a staggeringly high value for a non-trivial effect. The model corresponding to this test is indicated by the lines in this figure:

Tim Lundeen's data
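Tim’s raw scores aren’t reproduced here, so here is a minimal sketch of the comparison with invented numbers standing in for the post-Day-40 scores. The data values, the units, and the `welch_t` helper are all assumptions for illustration, not Tim’s actual data or analysis script:

```python
# Sketch of the low-DHA vs. high-DHA comparison with made-up numbers.
# Lower score = faster arithmetic (say, ms per problem). Illustrative only;
# the real data are in the linked post.
from statistics import mean, variance

low_dha  = [620, 615, 625, 618, 622, 619, 624, 617]   # less-DHA days
high_dha = [598, 602, 595, 600, 597, 601, 596, 599]   # more-DHA days

def welch_t(a, b):
    """Welch's t statistic: difference in means over its standard error
    (sample variances, unequal-variance form)."""
    na, nb = len(a), len(b)
    se = (variance(a) / na + variance(b) / nb) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(low_dha, high_dha)
print(round(t, 1))
```

With toy numbers this clean, the t value comes out very large; the point is only the shape of the test, a two-group comparison of post-practice scores, not the particular value.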

The red (“more DHA”) points don’t fit the line very well, which suggests doing an analysis where the slopes can vary:

Tim Lundeen's arithmetic data

There is still a huge effect of DHA, now split between two terms in the model — a difference-in-level term (t = 4) and a difference-in-slope term (t = 3).

But this analysis can be improved because based on thousands of experiments I don’t believe that the less-DHA line could have a positive slope, as it does in the model. Or at least I believe that is very unlikely. So I will constrain the less-DHA line to have a slope of zero:

Tim Lundeen's arithmetic data

Now I get t = 8 for the difference in slopes and t = 4 for the difference in level. This is interesting because it implies that more DHA not only caused immediate improvement but also opened the door to more gradual improvement (indicated by the slope difference). DHA changed something that allowed practice to have more effect.
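As a sketch of this constrained fit (again with invented day/score pairs; the numbers and names are assumptions for illustration): with the less-DHA slope fixed at zero, that group’s fitted level reduces to its mean, while the more-DHA line is an ordinary least-squares fit whose slope captures the resumed practice effect.

```python
# Sketch of the constrained model: less-DHA days get a flat line
# (slope fixed at zero); more-DHA days get their own level and slope.
# Day/score pairs are invented; real data are in the post.
from statistics import mean

low  = [(41, 620), (43, 621), (45, 619), (47, 620)]   # (day, score), less DHA
high = [(49, 605), (51, 601), (53, 597), (55, 593)]   # (day, score), more DHA

# Less-DHA fit: with slope constrained to 0, the best-fit level is the mean.
low_level = mean(s for _, s in low)

# More-DHA fit: ordinary least squares, score = b0 + b1 * day.
dx = mean(d for d, _ in high)
sx = mean(s for _, s in high)
b1 = sum((d - dx) * (s - sx) for d, s in high) / sum((d - dx) ** 2 for d, _ in high)
b0 = sx - b1 * dx

# Negative b1 = scores still improving with practice on more-DHA days,
# while the less-DHA line stays flat at low_level.
print(low_level, round(b1, 2))
```

The t values reported above would then come from dividing each fitted quantity (the level difference, the slope) by its standard error, as in standard regression output.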

That’s a new way of thinking about the effects of omega-3 — actually, I have never seen any data with the feature that a treatment caused a practice effect to resume — so I have to thank the person who claimed the difference wasn’t obvious.

Omega-3 and Arithmetic (evaluation)

When I read an empirical scientific paper I ask four main questions:

1. How clear is the effect or correlation? Generally measured by p values.

2. How clear is the cause of the effect?

3. How much can we generalize from this?

4. Assuming we can generalize, how much will this change what anyone does?

The overall value is something like the product of the answers. Most research gets a modest score on #1 (because a high score would be overkill and, anyway, the low-hanging fruit has been picked) and a low score on #4. Experiments get a high score on #2, surveys a low score.

Tim Lundeen’s little experiment that I described a few days ago, in which he found that a higher dose of DHA improved his arithmetic ability, gets a very high score:

1. The effect is very clear.

2. It’s an experiment. Because the variation was between two plausible doses of a food supplement, I doubt it’s a placebo effect.

3. The subject, the treatment, and the test are “ordinary” — e.g., Tim does not fall into a special group that might make him more likely to respond to the treatment.

4. Who wouldn’t want to improve how well their brain works?

From the point of view of a nutrition scientist, I’d guess, the effect is shockingly clear and direct. Experimental nutrition with humans almost always measures correlates of disease (e.g., correlates of heart disease) rather than disease. To me, an experimental psychologist, the results are shockingly useful. Practically all experimental psychology results (including mine) have little use to most people. The clarity of the effect does not quite shock me but I’m very impressed.

Birth of a Website

Several months ago I got this email from someone at the Seed Media Group:

Thank you for your interest in being hosted by ScienceBlogs. In the last couple of months, we have received well over a hundred queries from bloggers representing an impressive breadth and depth of science expertise. However, as we are trying to maintain a sense of community at ScienceBlogs, we are able to extend only a small number of invitations at a time. . . . In light of the very limited number of spaces we have to offer, we regret to inform you that we cannot extend you an invitation at this time.

This was sent to about 50 people. Their email addresses were visible. One of the recipients thought that we, the rejectees, could form our own umbrella website and wrote to us about this. I replied:

I love the idea of a form rejection letter leading to the founding of a competitive website — count me in!

Four months later I got an invitation to join the result, www.scientificblogging.com. It is now a well-functioning website with lots of interesting stuff.

Does the Type of Fat in Your Diet Affect Your Brain?

Here’s how a Ph.D. student at UC San Francisco doing research on neural stem cells answered that question:

If dietary fat affected the brain in a significant way, we would know about it. It would have been discovered. Which isn’t to say it doesn’t affect it in a trivial way. Not just an acute action — like if you drink a small amount of alcohol or the effect of a sugar high. I mean the long-term functioning.

Why?

Because a lot of people would have tested it. Fat gets a lot of money. It’s a crowded area of research. People try to exploit it. It’s an area with a lot of public interest. It’s very popular to study anything related to fat. Also anything related to the brain. People are worried that they will lose their mind as they age. If there was something significant found it would have been a big story all over the New York Times.

She was very sure of this, it seemed to me.

My main posts about omega-3.

Durian and SLD

The obvious connection between durian, the big smelly spiky Asian fruit, and the Shangri-La Diet is that both rely on flavor-calorie learning. We come to like the initially unpleasant smell and flavor of durian because we learn to associate it with the calories in the fruit. Here’s what happens:

“To anyone who doesn’t like durian it smells like a bunch of dead cats,” said Bob Halliday, a food writer based in Bangkok. “But as you get to appreciate durian, the smell is not offensive at all. It’s attractive.”

From an article in today’s NY Times. The theory that led me to the SLD centers on flavor-calorie learning.

A less obvious connection is a principle that helped me discover that drinking sugar water causes weight loss. I was in Paris and lost my appetite — a rare event. The principle is that rare events are usually due to rare events. So I wondered what else unusual had happened. Well, there was something: I had been drinking several sugar-sweetened unfamiliar soft drinks per day. When I got back to Berkeley I started to test the possibility that sugar-sweetened water can cause weight loss and SLD was born.

For a fruit, durian has three rare properties:

1. very strong, unpleasant smell

2. very big

3. hard to handle (because spiky)

Following the Rare-Causes-Rare principle, these should have a common explanation. Lightning does not strike thrice in one place for different reasons. According to Wikipedia,

The thorny armored covering of the fruit may have evolved because it discourages smaller animals, since larger animals are more likely to transport the seeds far from the parent tree.

That’s a good explanation of #3 and it explains the other two rare features (#1 and #2) as well. The strong smell (#1) broadcasts the signal a long distance, which matters because large animals are more thinly spread over the landscape than small ones. We think of the smell of ripe durian as very unpleasant but perhaps almost all unfamiliar smells are unpleasant; so any random strong smell will seem very unpleasant. Big fruit (#2) means big tree, and big tree means that seeds must be carried far away so as to be placed in soil where they will not compete with the mother tree. Coconuts are big and hard to eat. Pineapples are big and spiky.

The Rare-Causes-Rare principle also helped me discover the effect of morning faces on my mood and the effect of omega-3 on my balance.