Omega-3 and Arithmetic (evaluation)

When I read an empirical scientific paper I ask four main questions:

1. How clear is the effect or correlation? Generally measured by p values.

2. How clear is the cause of the effect?

3. How much can we generalize from this?

4. Assuming we can generalize, how much will this change what anyone does?

The overall value is something like the product of the answers. Most research gets a modest score on #1 (because a high score would be overkill and, anyway, the low-hanging fruit has been picked) and a low score on #4. Experiments get a high score on #2; surveys get a low one.
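To make the "product of the answers" idea concrete, here is a minimal sketch in Python. The function name, the 0-to-1 scale, and the numbers are my own illustration, not part of the framework above; the point is only that multiplying (rather than adding) means a near-zero answer to any one question makes the whole study nearly worthless.

```python
# Illustrative sketch only: score each of the four questions from 0 to 1
# and take the product. The particular numbers below are made up.

def study_value(effect_clarity: float, cause_clarity: float,
                generality: float, practical_impact: float) -> float:
    """Overall value of a study; each argument is a subjective 0-1 score."""
    return effect_clarity * cause_clarity * generality * practical_impact

# A typical survey: clear correlation, murky cause, little practical impact.
print(study_value(0.6, 0.2, 0.5, 0.1))   # 0.006

# A self-experiment like Tim Lundeen's, scoring high on all four questions.
print(study_value(0.9, 0.8, 0.7, 0.8))   # ~0.40
```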

Tim Lundeen’s little experiment that I described a few days ago, in which he found that a higher dose of DHA improved his arithmetic ability, gets a very high score:

1. The effect is very clear.

2. It’s an experiment. Because the variation was between two plausible doses of a food supplement, I doubt it’s a placebo effect.

3. The subject, the treatment, and the test are “ordinary” — e.g., Tim does not fall into a special group that might make him more likely to respond to the treatment.

4. Who wouldn’t want to improve how well their brain works?

From the point of view of a nutrition scientist, I'd guess, the effect is shockingly clear and direct. Experimental nutrition with humans almost always measures correlates of disease (e.g., correlates of heart disease) rather than the disease itself. To me, an experimental psychologist, the results are shockingly useful. Practically all experimental psychology results (including mine) are of little use to most people. The clarity of the effect does not quite shock me, but I'm very impressed.
