Journal of Personal Science: Omega-3 and ADHD (Part 2 of 2)


by Allan Folz

My story of omega-3 and self-experimentation did not end with my wife and her pregnancy. As I mentioned, I discovered the paleo diet, Vitamin D, and fish oil all at about the same time. Mostly for reasons of general good health, we began supplementing with Vitamin D and fish oil (a Mega-EPA omega-3 supplement). I ordered some of each from the same place online, and we began taking both at the same time, around January-February of 2010.

At the time my son was in kindergarten and having problems socializing at school. He had them at home too, but we’d all adjusted to them at home. He exhibited a lot of what would be called typical spectrum issues, though I was certain he didn’t have anything approaching Asperger’s. He did quite well at things that interested him, such as building with Legos or playing outside, with or without friends. It struck me that he was a high-energy boy who didn’t appreciate receiving directions, desk work, or anything requiring moderation and self-reflection. I like to joke that Tom Sawyer is hardly a modern archetype.

Nonetheless, he was having problems. The Vitamin D Council web site had a number of very persuasive anecdotes from parents about autistic children cured by Vitamin D. Our son wasn’t autistic, but autism involves several behaviors, and he had a few of them. He didn’t make good eye contact when talking or being talked to. He wouldn’t follow directions if he didn’t feel an intrinsic motivation to follow them. He had trouble falling asleep and would often lie in bed restless for an hour or more at night. The Vitamin D Council recommended 2000 IU per 50 lb of body weight per day, so that’s what we all took. We also took one fish oil capsule a week. At the time I thought of omega-3 as being only for heart health, which made me a little skeptical about how much was really required. We seemed a healthy family, so I figured our needs were modest. One capsule a week seemed well beyond the norm, so I figured we should be good.

Almost immediately after beginning the supplements my son’s behavior improved. I was pleasantly surprised and attributed it to the Vitamin D based on what I’d read on the Vitamin D Council web site. It wasn’t a cure by any means, but it was a very noticeable improvement. He would still have bad days, and I was a little bummed that after the initial improvement the Vitamin D didn’t seem to be helping any further. However, I figured such is real life outside of attention-grabbing headlines.

Two years later, January and February of 2012, second grade for him, and about a year and a half after the self-experiment with my wife during her third pregnancy, my son’s behavior dramatically worsened. We were all still taking D, but at that point it was obviously not showing any benefit for my son. He was in a worse place than when he was in kindergarten. I resigned myself to Vitamin D not being his problem, and at his teacher’s demand signed him up for outside testing.

I didn’t notice at the time, but we had run out of the “fish oil” over Christmas break. The second week of January we visited family in the Midwest. When we returned, school was a nightmare for him and us. My wife and I attributed it to too much TV, bad diet, and not enough sleep while we were visiting family. However, even two and three weeks after our return his problems were worsening. Around the beginning of February, I finally got around to ordering another bottle of the omega-3. I thought of it as mostly being for my wife, who was doing fine, so I didn’t feel any immediacy. When it finally arrived, we all started taking it again. Immediately his behavior improved. It was such a night-and-day difference that the connection was impossible to miss. It was like kindergarten when he first started taking Vitamin D, only far more so. For the first time in two weeks he wasn’t angry and crying at the end of the day. That’s when it occurred to me that in kindergarten he had started taking fish oil at the same time as the Vitamin D. For the last two years I had been attributing to Vitamin D what was due to the omega-3 supplement. I felt like an idiot.

After that, I did some research on omega-3, fish oil, and ADHD. When I knew what to look for, I found that there were, in fact, a few studies about using omega-3 for ADHD treatment. It seemed that EPA was effective while DHA was not, or at best was less effective than EPA. When I took a closer look at our “fish oil,” I remember thinking to myself, “Oh wow, this stuff is Mega-EPA. How lucky is that.” I had chosen it almost at random. It had the best per-dose price and was listed as a top seller.

In retrospect, were it not for the pain and difficulty experienced by my son, it would be funny how the answer was under my nose the whole time. I was slow to appreciate it because of my own prejudice and not treating the problem as something to scientifically test. I thought of omega-3 as being for heart health. I’d never seen it mentioned in relation to emotional health or brain development, outside of the usual bromides about eating walnuts and so forth. Plus the recommendations are always couched in generalities without specific dosage guidelines. Even after I discovered it made a difference for my pregnant wife, it didn’t occur to me to test it seriously on my son. Their symptoms and nutritional needs seemed unrelated.

A few weeks after the evaluation, we met with the professional who had tested our son to go over the results; it was during that interval that I made the omega-3 discovery. I told him that our son was getting really good results from the omega-3 supplement, and that after noticing those results I had done some online searching and found a few scientific studies supporting the use of omega-3 supplements for ADHD. He said he was aware of the studies, but the efficacy wasn’t as certain or as strong as with the prescription drugs, so most people choose the prescriptions. (He sent me the same Bloch & Qawasmi paper Seth linked to in his April 21 Assorted Links.) I wondered whether most people were even made aware of the possibility of omega-3 deficiency — he certainly didn’t bring it up with us. I would not have found the research papers without first knowing what to look for, and I knew what to look for only because of the discovery I made with my son.

The omega-3 supplement, while a huge improvement, was not an immediate cure. We started giving him two capsules daily, which together contained 800 mg EPA and 400 mg DHA. That seemed to me a lot of omega-3, relative to what one could consume through normal dietary intake.

I was not entirely comfortable with that dose long-term, despite its clearly working. So every couple of months I’d have him skip a day or a whole weekend. Without fail, his mood noticeably worsened. By the early evening he would be overwhelmed and frustrated to the point of tears by little things that weren’t going his way, things that were really just the usual complications of life in a household with two parents and two siblings.

A poignant instance of the effects of missing a dose happened in the Fall of the following school year, still 2012. My wife’s mother came for a visit. The break from routine caused my wife to forget to give our son his omega-3 supplement for three or four days in a row. He might have had them on Sunday, but not on any of the weekdays. By Thursday I had gotten a note and phone call from his teacher about his behavior at school. We had to go and meet with her the following week. At the meeting I shared that we had forgotten to give him the omega-3 capsules due to his grandmother visiting. I saw this as proof it was working. The teacher didn’t know we had forgotten, and yet his behavior had noticeably regressed. She did not share my awe, and tried to imply that he should be on a prescription. I said that kids can forget prescriptions just as easily and the side effects from a missed prescription are going to be far worse than three days off an omega-3 supplement.

Last month we again ran out of the omega-3 supplement. Except for the accidental occurrence when my wife’s mother was visiting, this is the first time he’s been off it for more than a few consecutive days in the two years and two months since I first discovered it helped him. I’m quite pleased that he seems to be doing OK; there’s been virtually no difference in his behavior since stopping. However, it’s not a true cold-turkey quit. We have some of the Green Pastures FCLO-infused coconut oil, so he’s been taking that instead. The manufacturer is vague about its omega-3 content, but my rough estimate is that he’s taking a third to a half of his previous dose with the Mega-EPA capsules. Then again, it’s in the triglyceride form, which is supposed to be 50-70% better absorbed on a per-gram basis. Perhaps it’s a wash.
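For what it’s worth, here is the back-of-the-envelope arithmetic behind “perhaps it’s a wash,” written as a small Python sketch. The FCLO omega-3 numbers are my own guesses (the manufacturer doesn’t publish them), and the 50-70% absorption bonus is simply the triglyceride-form claim mentioned above.

```python
# Back-of-the-envelope check of "perhaps it's a wash".
# The FCLO omega-3 content is a guess; the manufacturer is vague about it.
capsule_dose_mg = 800 + 400          # previous daily dose: 800 mg EPA + 400 mg DHA
fclo_fraction = (1 / 3, 1 / 2)       # rough estimate: a third to a half of the old dose
absorption_bonus = (1.5, 1.7)        # triglyceride form, said to be 50-70% better absorbed

low = capsule_dose_mg * fclo_fraction[0] * absorption_bonus[0]
high = capsule_dose_mg * fclo_fraction[1] * absorption_bonus[1]
print(f"Effective FCLO dose: roughly {low:.0f}-{high:.0f} mg/day "
      f"vs. {capsule_dose_mg} mg/day from the capsules")
```

Under those guesses the effective dose comes out somewhat below the old one (roughly 600-1020 mg vs. 1200 mg) but in the same ballpark, which is about what “a wash” would require.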

I’ve thought about having him try flax oil. There is considerable debate about the efficacy of flax oil and the body’s ability to synthesize EPA and DHA from ALA, the omega-3 in flax oil. It might be a little late to test its efficacy now. The best time to test was one or two years ago, when the Mega-EPA supplement was clearly working and had an “efficacy” half-life of 24 hours. The thought never occurred to me until recently, when reading Seth’s blog.

I can’t end without sharing some of my frustrations with the state of health science. There is no doubt in my mind that omega-3 helped both my son and my wife deal with some severe and yet common mental health problems. I’m a pretty sharp, pretty well-read guy who’s always had an interest in biology and medicine. Yet outside of a few esoteric corners of the web, where you have to know what you’re looking for in order to find it, omega-3 is something you take for heart health.

I think the comparison with statins is apt. When “heart-healthy whole grains” don’t fix one’s blood markers, and why would they, it’s very quickly on to prescription drugs (statins). When “use your words” doesn’t fix a young boy’s interactions with classmates and teachers, and why would it, it’s on to prescription drugs. Boys especially are put on incredibly strong pharmaceuticals with well-established risks that include stunted growth and suicide. Pharmaceuticals should be tried last, but they are clearly being tried first by frustrated parents and suspect practitioners. It’s a national shame and a personal outrage.

Part 1, about using omega-3 to treat postpartum depression, appeared yesterday. Allan Folz is a software developer in Portland, Oregon. He recently co-founded Edison Gauss Publishing, a software house that makes academically rigorous educational apps for children in grades K-8. Their apps are suitable for both classroom and home use, and have proven particularly popular among homeschoolers who appreciate a traditional approach to practicing math.

Lyme Disease and Bad Medicine

I got Cure Unknown: Inside the Lyme Epidemic (2008) by Pamela Weintraub from the library and found something surprising: an angry foreword. Weintraub is a science journalist; the foreword is by Hillary Johnson, another science journalist and apparently a friend of Weintraub’s.

In her anger, Johnson says several things I say on this blog.

The more Weintraub investigated, the more virtually everyone with a shred of authority was losing their credibility. . . The so-called “objective” scientists were sending an entire disease down the river and over the cliff [meaning they ignored it] for reasons that seemed frequently to have more to do with mere opinion and crass external forces — cash, prestige, careerism — than with scientific erudition.

She rejected the science writer’s inbred habit of relying on the government official with the highest pay grade or the scientist with a job at Harvard as the final word on a topic. . . . I think of her, with enormous respect, as a “recovered” science journalist.

As one who also suffers from the disease I chronicled with kindred passion in Osler’s Web, I sometimes wonder if the only investigative writers who will possess the necessary temerity to remove the white gloves and tackle those putative experts to the ground will be those . . . whose personal experience demands they follow the rocky trail that leads to the truth.

The last point is the most important, I think. You can curse the careerism of Harvard medical school professors and the servility of science writers but that does nothing, or not much, and what you are upset about (careerism and servility) is unsurprising. Less obvious, at least to me, is that there is a way to overcome the careerism and servility. It still surprises me that I was able to figure out interesting stuff about sleep, obesity, depression and so on that the experts in these fields hadn’t figured out — and that sometimes contradicted what they said. (For example, I found sugar isn’t fattening.) As Johnson says, one reason I was able to learn so much was that I wanted to sleep better, lose weight, be in a better mood, and so on. Unlike the experts in those fields, for whom research was a job.

Saturated Fat and Heart Attacks

After I discovered that butter made me faster at arithmetic, I started eating half a stick (66 g) of butter per day. After a talk I gave about this, a cardiologist in the audience said I was killing myself. I said that the evidence that butter improved my brain function was much clearer than the evidence that butter causes heart disease. The cardiologist couldn’t debate this; he seemed to have no idea of the evidence.

Shortly before I discovered the butter/arithmetic connection, I had a heart scan (a tomographic x-ray) from which an Agatston score is computed, a measure of the calcification of your blood vessels. The Agatston score is a good predictor of whether you will have a heart attack: the higher your score, the greater the probability. My score put me close to the median for my age. A year later — after eating lots of butter every day during that year — I got a second scan. Most people’s scores get about 25% worse each year. My second scan showed regression (= improvement): it was 40% lower than the expected score (my first score plus the typical 25% increase). A big increase in butter consumption was the only aspect of my diet that I consciously changed between Scan 1 and Scan 2.
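To make the arithmetic concrete, here is a worked example; the starting score of 100 is purely illustrative, since the actual scores are not given here.

```python
# Illustrative only: the actual Agatston scores are not given in the post.
first_scan = 100.0                              # hypothetical baseline score
expected_second = first_scan * 1.25             # typical ~25% yearly worsening
observed_second = expected_second * (1 - 0.40)  # 40% below the expected score

print(f"Expected second scan: {expected_second:.0f}")  # 125
print(f"Observed second scan: {observed_second:.0f}")  # 75, i.e. below baseline
```

On these made-up numbers, a score 40% below expectation is also well below the baseline score, which is what “regression” means here.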

The improvement I observed, however surprising, was consistent with a 2004 study that measured narrowing of the arteries as a function of diet. About 200 women were studied for three years. There were three main findings:

1. The more saturated fat, the less narrowing. Women in the highest quartile of saturated fat intake didn’t have, on average, any narrowing.
2. The more polyunsaturated fat, the more narrowing.
3. The more carbohydrate, the more narrowing.

Of all the nutrients examined, only saturated fat clearly reduced narrowing. Exactly the opposite of what we’ve been told.

As this article explains, the original idea that fat causes heart disease came from Ancel Keys, who omitted most of the available data from his data set. When all the data were considered, there was no connection between fat intake and heart disease. There has never been convincing evidence that saturated fat causes heart disease, but somehow this hasn’t stopped the vast majority of doctors and nutrition experts from repeating what they’ve been told.

The Emperor’s New Clothes and the New York Times Paywall

A few years ago I blogged about three books I called The Emperor’s New Clothes trilogy. Each book described a situation in which, from a certain point of view, powerful people — our supposed leaders — “walked around naked”, that is, did things absurd to the naked eye, like the Emperor in the story. As in the story, many people, including experts, said nothing.

After reading about the fate of the Washington Post, I thought of the New York Times paywall, which can be avoided (i.e., defeated) by using what Chrome calls “incognito mode”. (Firefox has a similar mode.) I didn’t know this until recently; some of my friends didn’t know it. One of them carefully rationed the Times articles she read. I wonder how long the ignorance will last. The Times is an extremely important institution. In the many long discussions at the Times about the paywall, did no one mention this?

“A Debt-Ceiling Breach Would be Very, Very, Very Bad”

At the end of an article by Kevin Roose in New York about the effects of a debt-ceiling breach:

The bottom line: A debt-ceiling breach would be very, very, very bad.

Keep in mind that these are all hypothetical scenarios. Reality could be better, or much worse. The truth is that while we sort of know what a government shutdown would look like (since it’s happened in the past), we have no idea what chaos a debt-ceiling breach could bring. If, in a month, we reach the X Date, run out of money, and are stuck in political stalemate, we’ll be entering truly uncharted waters. And we’ll be dealing our already-fragile economy what could amount to a knockout blow.

This is an example of something common: Someone who has never correctly predicted anything (in this case, Roose) telling the rest of us what will happen with certainty. If Roose is repeating what experts told him, he should have said who, and their track record. Roose is far from the only person making scary predictions without any evidence he can do better than chance. Here is another example by Derek Thompson in The Atlantic.

The same thing happens with climate change, except that it is models, not people, making predictions. Models that have never predicted climate correctly — for example, none predicted the current pause in warming — are assumed to predict climate correctly. We are supposed to be really alarmed by their predictions. This makes no sense, but there it is. Hal Pashler and I wrote about this problem in psychology.

A third example is the 2008 financial crisis. People who failed to predict the crisis were put in charge of fixing it. By failing to predict the crisis, they showed they didn’t understand what caused it. It is transparently unwise to have your car fixed by someone who doesn’t understand how cars work, but that’s what happened. Only Nassim Taleb seems to have emphasized this. We expect scary predictions based on nothing from religious leaders — that’s where the word apocalypse comes from. From journalists and the experts they rely on, not so attractive.

I don’t know what will happen if there is a debt-ceiling breach. But at least I don’t claim to (“very very very bad”). And at least I am aware of a possibility that Roose (and presumably the experts he consulted) don’t seem to have thought of. A system is badly designed if a relatively likely event (a debt-ceiling breach) can cause disaster — as Roose claims. The apocalyptic possibilities give those in control of whether that event happens (e.g., Republican leaders in Congress) too much power — the power to scare credulous people. If there is a breach, we will find out what happens. If a poorly built system falls down, it will be much easier to build a better one. Roose and other doom-sayers fail to see that there are plausible arguments on both sides.

“Science is the Belief in the Ignorance of Experts” — Richard Feynman

“Science is the belief in the ignorance of experts,” said the physicist Richard Feynman in a 1966 talk to high-school science teachers. I think he meant that science is the belief in the fallibility of experts. In the talk, he says science education should be about data — how to gather data to test ideas and get new ideas — not about conclusions (“the earth revolves around the sun”). And it should be about pointing out that experts are often wrong. I agree with all this.

However, I think the underlying idea — what Feynman seems to be saying — is simply wrong. Did Darwin come up with his ideas because he believed experts (the Pope?) were wrong? Of course not. Did Mendel do his pea experiments because he didn’t trust experts? Again, of course not. Darwin and Mendel’s work showed that the experts were wrong but that’s not why they did it. Nor do scientists today do their work for that reason. Scientists are themselves experts. Do they do science to reveal their own ignorance? No, that’s blatantly wrong. If science is the belief in the ignorance of experts, and X is the belief in the ignorance of scientists, what is X? Our entire economy is based on expertise. I buy my car from experts in making cars, buy my bread from bread-making experts, and so on. The success of our economy teaches us we can rely on experts. Why should high-school science teachers say otherwise? If we can rely on experts, and science rests on the assumption that we can’t, why do we need scientists? Is Feynman saying experts are wrong 1% of the time, and that’s why we need science?

I think what Feynman actually meant (but didn’t say clearly) is that science protects us against self-serving experts. If you want to talk about the protection-against-experts function of science, the heart of the matter isn’t that experts are ignorant or fallible. It is that experts, including scientists, are self-serving. The less certainty in an area, the more experts in that area slant or distort the truth to benefit themselves. They exaggerate their understanding, for instance. A drug company understates bad side effects. (Calling this “ignorance” is too kind.) This is common, non-obvious, and worth teaching high-school students. Science journalists, who are grown-ups and should know better, often completely ignore this. So do other journalists. Science (data collection) is unexpectedly powerful because experts are wrong more often than a naive person would guess. The simplest data collection is to ask for an example.

When Genius by James Gleick (a biography of Feynman) was published, I said it should have been titled Genius Manqué. This puzzled my friends. Feynman was a genius, I said, but lots of geniuses have had a bigger effect on the world. I heard Feynman himself describe how he came to invent Feynman diagrams. One day, when he was a graduate student, his advisor, John Wheeler, phoned him. “Dick,” he said, “do you know why all electrons have the same charge? Because they’re the same electron.” One electron moves forward and backward in time, creating all the electrons we observe. Feynman diagrams came from this idea. The Feynman Lectures on Physics were a big improvement over standard physics books — more emotional, more vivid, more thought-provoking — but contain far too little about data, in my opinion. Feynman failed to do what he told high school teachers to do.

Progress in Psychiatry and Psychotherapy: The Half-Full Glass

Here is an excellent introduction to cognitive-behavioral therapy (CBT) for depression, centering on a Stanford psychiatrist named David Burns. I was especially interested in this:

[Burns] currently draws from at least 15 schools of therapy, calling his methodology TEAM—for testing, empathy, agenda setting and methods. . . . Testing means requiring that patients complete a short mood survey before and after each therapy session. In Chicago, Burns asks how many of the therapists [in the audience] do this. Only three [out of 100] raise their hands. Then how can they know if their patients are making progress? Burns asks. How would they feel if their own doctors didn’t take their blood pressure during each check-up?

Burns says that in the 1970s at Penn [where he learned about CBT], “They didn’t measure because there was no expectation that there would be a significant change in a single session or even over a course of months.” Forty years later, it’s shocking that so little attention is paid to measuring whether therapy makes a difference. . . . “Therapists falsely believe that their impression or gut instinct about what the patient is feeling is accurate,” says May [a Stanford-educated Bay Area psychiatrist], when in fact their accuracy is very low.

When I was a graduate student, I started measuring my acne. One day I told my dermatologist what I’d found. “Why did you do that?” he asked. He really didn’t know. Many years later, an influential psychiatrist — Burns, whose Feeling Good book, a popularization of CBT, has sold millions of copies — tells therapists to give patients a mood survey. That’s progress.

But it is also a testament to the backward thinking of doctors and therapists that Burns didn’t tell his audience:

–have patients fill out a mood survey every day
–graph the results

Even more advanced:

–use the mood scores to measure the effects of different treatments

Three cheap, safe things. It is obvious they would help patients. Apparently Burns doesn’t do these things with his own patients, even though his own therapy (TEAM) stresses “testing” and “methods”. It’s 2013. Not only do psychiatrists and therapists not do these things, they don’t even think of doing them. I seem to be the first to suggest them.
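A minimal sketch of how little machinery those three suggestions require; the file name, column names, and treatment labels below are invented for illustration.

```python
# Sketch of the three suggestions: daily mood scores, a graph, and a
# comparison across treatments. "mood_log.csv" and its columns are invented.
import csv
from collections import defaultdict
from statistics import mean

import matplotlib.pyplot as plt

with open("mood_log.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # columns: date, mood (0-100), treatment

# 1. Daily mood survey -> 2. graph the results.
dates = [r["date"] for r in rows]
moods = [float(r["mood"]) for r in rows]
plt.plot(dates, moods, marker="o")
plt.xlabel("date")
plt.ylabel("mood (0-100)")
plt.savefig("mood_over_time.png")

# 3. Use the scores to compare treatments.
by_treatment = defaultdict(list)
for r in rows:
    by_treatment[r["treatment"]].append(float(r["mood"]))
for treatment, scores in sorted(by_treatment.items()):
    print(f"{treatment}: mean mood {mean(scores):.1f} over {len(scores)} days")
```

Nothing here is beyond a spreadsheet; the point is that once the daily numbers exist, graphing them and comparing treatments is nearly free.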

Thanks to Alex Chernavsky.

Assorted Links

Thanks to Jeff Winkler and Tom George.

What Goes Unsaid: Self-Serving Health Research

“The realization that the world is often quite different from what is presented in our leading newspapers and magazines is not an easy conclusion for most educated Americans to accept,” writes Ron Unz. He’s right. He provides several examples of the difference between reality and what we are told. In finance, there are Bernie Madoff and Enron, huge frauds that were supposed to be detected and weren’t. In geopolitics, there is the Iraq War: Saddam Hussein’s Baathists and al-Qaeda were enemies, so invading Iraq because of 9/11 made as much sense as attacking “China in retaliation for Pearl Harbor” — a point rarely made before the war. In these cases, the national media wasn’t factually wrong; no one claimed Madoff wasn’t running a Ponzi scheme. The problem is that something important went unsaid: no one said Madoff was running a Ponzi scheme.

This is how the best journalists (e.g., at The New Yorker and the New York Times) get it wrong — so wrong that “best” may be the wrong word. In the case of health, what is omitted from the usual coverage has great consequences. Health journalists fail to point out the self-serving nature of health research, the way it helps researchers at the expense of the rest of us.

The recent Health issue of the New York Times Magazine has an example. An article by Peggy Orenstein about breast cancer, meant to be critical of current practice, goes on and on about how screening has not had the promised payoff, as has been widely noted. What Orenstein fails to understand is that the total emphasis on screening was a terrible mistake to begin with. Before screening was tried, it was hard to know whether it would fail or succeed; it was worth trying, absolutely. But it was always entirely possible that it would fail — as it has. A better research program would have split the funds 50/50 between screening and lifestyle-focused prevention research.

The United States has the highest age-adjusted breast cancer incidence rate in the world — about 120 per 100,000 women, in contrast to 20-30 per 100,000 women in poor countries. This implies that lifestyle changes can produce big improvements. Orenstein doesn’t say this. She fails to ask why the Komen Foundation has totally emphasized cure (“race for the cure”) over prevention through lifestyle change. In a long piece, here is all she says about lifestyle-focused prevention:

Many [scientists and advocates] brought up the meager funding for work on prevention. In February, for instance, a Congressional panel made up of advocates, scientists and government officials called for increasing the share of resources spent studying environmental links to breast cancer. They defined the term liberally to include behaviors like alcohol consumption, exposure to chemicals, radiation and socioeconomic disparities.

Nothing about how the “meager funding” was and is a huge mistake. Xeni Jardin of Boing Boing called Orenstein’s article “a hell of a piece”. Fran Visco, the president of the National Breast Cancer Coalition, praised Orenstein’s piece and wrote about preventing breast cancer via a vaccine. Jardin and Visco, like Orenstein, failed to see the elephant in the room.

Almost all breast-cancer research money has gone to medical school professors (most of whom are men). They don’t do lifestyle research, which is low-tech. They do high-tech cure research. Breast cancer screening, which is high-tech, fits their overall focus. High-tech research wins Nobel Prizes; low-tech research does not. For example, those who discovered that smoking causes lung cancer never got a Nobel Prize. Health journalists, most of whom are women, apparently fail to see, and definitely fail to write about, how they (and all women) are harmed by this allocation of research effort. The allocation helps the careers of the researchers (medical school professors); it hurts anyone who might get breast cancer.

The Blindness of Scientists: The Problem isn’t False Positives, It’s Undetected Positives

Suppose you have a car that can only turn right. Someone says, “Your car turns right too much.” You might wonder why they don’t see the bigger problem (it can’t turn left).

This happens in science today. People complain about how well the car turns right, failing to notice (or at least say) that it can’t turn left. Just as a car should turn both right and left, scientists should be able to (a) test ideas and (b) generate ideas worth testing. Tests are expensive. To be worth the cost of testing, an idea needs a certain plausibility. In my experience, few scientists have clear ideas about how to generate ideas plausible enough to test. The topic is not covered in any statistics text I have seen — the same books that spend many pages on how to test ideas.

Apparently not noticing the bigger problem, scientists sometimes complain that this or that finding “fails to replicate”. My former colleague Danny Kahneman is an example. He complained that priming effects were not replicating. Implicit in a complaint that Finding X fails to replicate is a complaint about testing: if you complain that X fails to replicate, you are saying that something was wrong with the tests that established X. There is a connection between replication failure and failure to generate ideas worth testing. If you cannot generate new ideas, you are forced to test old ideas. You cannot test an old idea exactly — that would be boring and repetitive. So you give an old idea a slight tweak and test the variation. For example, someone has shown that X is true in North America. You ask if X is true in South America. You hope you haven’t tweaked X too much. No idea is true everywhere, except maybe in physics, so as this process continues — it goes on for decades — the tested ideas gradually become less true and the experimental effects get weaker.

This is what happened in the priming experiments that Kahneman complained about. At the core of priming — the priming effects studied 30 years ago — is a true phenomenon. After reading “doctor” it becomes easier to decide that “nurse” is a word, for example. This was followed by 30 years of drift away from word recognition. Not knowing how to generate new ideas worth testing, social psychologists have ended up studying weak effects (recent priming effects) that are random walks away from strong effects (old priming effects). The weak effects cannot bear the professional weight (people’s careers rest on them) they are asked to carry and sometimes collapse (“failure to replicate”). Sheena Iyengar, a Columbia Business School professor and social psychologist, got a major award (best dissertation) for, and wrote a book about, a new effect that has turned out to be very close to non-existent. Inability to generate ideas — to understand how to do so — means that what appear to be new ideas (not just variations of old ideas) are more likely to be mistakes. I have no idea whether Iyengar’s original effect was true or not. I am sure, however, that it was weak and made little sense.

Statistics textbooks ignore the problem. They say nothing about how to generate ideas worth testing. I haven’t asked statisticians about this, but they might respond in one of two ways: 1. That’s someone else’s problem. Statistics is about what to do with data after you gather it. That makes as much sense as teaching someone how to land a plane but not how to take off. 2. That’s what exploratory data analysis is for. If I said “Exploratory data analysis can only identify effects of factors that the researcher decided to vary or track. Which is expensive. What about other factors?” they’d be baffled, I believe. In my experience, exploratory data analysis = full analysis of your data. (Many people do only a small fraction, such as 10%, of all reasonable analyses of their data.) Full analysis is better than partial analysis, but calling it a way to find new ideas fails to understand that professional scientists study the same factors over and over.

I suppose many scientists feel the gap acutely. I did. I became interested in self-experimentation most of all because it generated new ideas at a much higher rate (per year) than my professional experiments with rats. At first I had no idea why, but as it kept happening — as my self-experimentation generated one new idea after another — I came to believe that by accident I was doing something “right”. I was doing something that fit a general rule of how to generate ideas, even though I didn’t know what the general rule was.

The sciences I know about (psychology and nutrition) have great trouble coming up with new ideas. The paleo movement is a response to stagnation in the field of nutrition. The Shangri-La Diet shows what a new idea looks like in the area of weight control. The failure of nutritionists to study fermented foods is ongoing. Stagnation in psychology can be seen in the fact that antidepressants remain heavily prescribed many years after the introduction of Prozac (my work on morning faces and mood suggests a much different approach), in the lack of change in treatments for bipolar disorder over the last 50 years (again, my morning-faces work suggests another approach), and in the failure of social psychologists to discover any big new effects in the last ten years.

Here is the secret to idea generation: Cheaper tests. To find ideas plausible enough to be worth testing with Test X, you need a way of testing ideas that is cheaper than Test X. The cheaper your test, the larger the region of cause-effect space you can explore. Let’s say Test Y is cheaper than Test X. With Test Y, you can explore more of cause-effect space than you can explore with Test X. In the region unexplored by Test X, you can find points (cause-effect relationships) that pass Test Y. They are worth testing with Test X. My self-experimentation generated new ideas worth testing with more expensive tests because it was much cheaper than existing tests. Via self-experimentation, I could test many ideas too implausible or too expensive to be tested conventionally. Even cheaper than a self-experiment was simply monitoring myself — tracking my sleep, for example. Again and again, this generated ideas worth testing via self-experimentation. I did what all scientists should do: use cheaper tests to generate ideas worth testing with more expensive tests.
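Here is a small simulation of that logic; every number in it is invented for illustration. The point is only that a cheap, noisy screen applied to many candidate ideas raises the proportion of true ideas among those passed on to the expensive test.

```python
# Invented numbers throughout: a cheap, noisy screen (like self-tracking or a
# self-experiment) applied before an expensive test raises the fraction of
# true cause-effect relationships among the ideas that reach the expensive test.
import random

random.seed(0)

N_IDEAS = 10_000            # candidate cause-effect relationships
PRIOR_TRUE = 0.02           # only a few are real
CHEAP_SENSITIVITY = 0.7     # the cheap test passes 70% of true ideas...
CHEAP_FALSE_POSITIVE = 0.1  # ...and 10% of false ones

ideas = [random.random() < PRIOR_TRUE for _ in range(N_IDEAS)]

def cheap_test(is_true: bool) -> bool:
    """A noisy, inexpensive screen."""
    p = CHEAP_SENSITIVITY if is_true else CHEAP_FALSE_POSITIVE
    return random.random() < p

passed = [t for t in ideas if cheap_test(t)]

print(f"True ideas among all candidates:    {sum(ideas) / len(ideas):.1%}")
print(f"True ideas among those that passed: {sum(passed) / len(passed):.1%}")
```

With these made-up numbers, the screen turns roughly a 2% base rate into roughly a 12% hit rate, which is the sense in which cheaper tests generate ideas worth testing with more expensive ones.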