Bear Stearns and Self-Experimentation

Understanding and investment go together: the more you understand something, the more you should invest in it. On Friday, Bear Stearns shareholders thought their stock was worth $30/share; it turned out they were utterly wrong.

In this sense, self-experimentation — research so cheap it can be done as a hobby — is a statement of complete ignorance. Because it is so cheap, you can test a hundred absurd ideas. If you use more expensive research methods, you cannot afford to test ideas you think are absurd; you must search a smaller solution space. If you are wrong about where the answer to your question lies — about which region of possibilities contains it — your research will fail to find it.

My self-experimentation about why I was waking up too early revealed that I was almost completely ignorant about what I was studying. Two of the causes I found — eating breakfast and not standing enough — were not on my list of possibilities when I started. The Shangri-La Diet is outside the range of weight-loss methods that obesity researchers consider reasonable; without self-experimentation, it would never be tested.

Alcoholism and Self-Experimentation

I’m impressed:

This is the story of Olivier Ameisen, a brilliant physician and cardiologist who developed a profound addiction to alcohol. He broke bones with no memory of falling. He nearly lost his kidneys; he fractured ribs and suffered a hemopneumothorax that left blood and air in the sac around his lungs. He gave up his flourishing practice and, fearing for his life, invested himself in Alcoholics Anonymous and, later, rehab. Nothing worked.
So he did the only thing he could: he took his treatment into his own hands. Searching for a cure for his deadly disease, he discovered baclofen, a muscle relaxant that had proven effective in curing rats addicted to every substance from nicotine and alcohol to cocaine and heroin. Ameisen prescribed himself the drug and, over a two-year period, experimented with the dosage until he reached a level high enough to leave him free of any craving for alcohol. That was four years ago.

Science in Action: Omega-3 (more motor-learning data)

Background. I took 4 T of flaxseed oil during the day (instead of just before bedtime) and measured its effect with a cursor test. The test measured how accurately I could move the cursor from one point to another with a single movement. The result was a sharp improvement — some of which lasted, some of which didn’t. (Just to be perfectly clear: what’s varied is not my daily amount of flaxseed oil but the time of day I take it. I’m varying the interval between a short-lived peak in omega-3 concentration, which happens shortly after ingestion, and the cursor test. Usually they are far apart. The interesting data are what happens when I move them close together.)
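
The post doesn’t say how the cursor test was scored. Here is a minimal sketch (in Python), assuming the score is simply the mean distance between where the cursor lands and the target — one plausible measure, not necessarily the one actually used.

    import math

    def accuracy_score(trials):
        """Mean landing error, in pixels, over a block of trials.
        trials: list of ((target_x, target_y), (landed_x, landed_y)) pairs.
        Lower scores mean better accuracy."""
        return sum(math.dist(target, landed) for target, landed in trials) / len(trials)

    # Example block of three single movements (all numbers made up):
    trials = [((100, 100), (104, 97)),
              ((300, 250), (295, 255)),
              ((50, 400), (52, 398))]
    print(f"mean error: {accuracy_score(trials):.1f} px")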

New data. I tried the same thing again. Here are the results.
[Figure: 2nd test of FSO on cursor accuracy]

The green line shows when I took the 4 tablespoons of flaxseed oil: 8:30 am. The first test after that, at 9:30 am, showed the improvement. (In previous measurements of the short-term effects, it has taken closer to 2 hours to see the maximum effect.)

Here is a longer view, which emphasizes the constancy of the pre-test baseline.

[Figure: wider view of results]

For comparison, here are the earlier results.

[Figure: earlier results with this test]

Conclusions. When I take 4 T of flaxseed oil, it creates, for a few hours, a higher-than-usual concentration of flaxseed oil in my blood. I’m pretty sure the active ingredient is omega-3. This has two effects:

  • Better performance due to temporary effects. It’s hard to give these effects a good name. Better coordination, perhaps.
  • Better performance due to long-lasting effects. This is why performance was constant at a lower (better) level after the test than before. The higher-than-usual concentration caused a change (more “learning” than usual) that outlasted it. The concentration of flaxseed oil dropped back to average levels but the learning persisted.

Ranjit Chandra and Milk Allergies

The following letter is from a Swedish professor who was president of the European Society of Pediatric Allergy and Clinical Immunology. Background about Ranjit Chandra.

Lerum, March 16th 2008

Dear Prof Roberts,

The correspondence/letters I have found or remembered are as follows.

1. In 1993, the European Society of Pediatric Allergy and Clinical Immunology (ESPACI) intended to publish a position paper on cow’s milk allergy (1). In my position as secretary of ESPACI, I wrote that paper in collaboration with the authors listed. We had intense discussions on whether or not we should cite Prof Chandra, whom we all knew but did not trust, mainly because we found his inclusion criteria and symptoms curious and not in accord with scientific knowledge at that time. We also objected because he had not performed any blinded oral provocation tests, and several authors, e.g. Arne Host (2), have found that less than 50% of those reporting symptoms at exposure had cow’s milk allergy at scheduled blinded oral provocation testing. I wrote a letter to the dean of the university of St John asking whether or not the rumors about Prof Chandra — that his nurse/secretary(?) had produced the results without the involvement of patients — were true. The reply was: “Since the allegations against Prof. Chandra have not been proven or disproven, he is still in office.” I do not find that letter in my files.
2. In 1997 Ranjit Chandra published a 5-year follow-up study of his cow’s-milk-allergic children (3). This paper included DBPCFC (double-blind, placebo-controlled food challenges). Some of my colleagues then drew the conclusion that everything was in order.
3. In 1998 we published a second position paper on cow’s milk allergy together with the European Society on Pediatric Gastroenterology and Nutrition, ESPGAN (4). At that time we accepted the Chandra paper, per point 2.
4. In 2003 we were writing up three papers later published in PAI (5-7). These publications were based on papers read during the ESPACI/Section on Pediatrics meeting in Padua, Italy, on dietary prevention of allergy. Since at that time I was President of ESPACI and Chairman of the Section on Pediatrics within EAACI and organizer of the meeting, I wrote papers I and II (in collaboration with the speakers), and Arne Host and Susanne Halken wrote paper III. Since I was still skeptical of the data by Chandra, I wrote a letter on Feb 15, 2003 to the dean of St John’s (enclosed), without any response. The three papers were published in 2004.
5. On January 19, 2006 I wrote once again to St John, since I never got any response from the dean; correspondence enclosed.
6. On February 16, 2006 I got a response from St John from Prof Strawbridge and responded. On February 20, 2006 I got another response and again responded to Prof Strawbridge, Dean of St John; enclosed.
7. On Feb 24 I got a copy from German friends, and on March 3rd another one from Arne Host, of the (enclosed) TV series on CBC of January 29th, 2006 and later.

The rest you know much better than I do.

Actually, I don’t know whether my correspondence has any value on a website. But maybe you can use it for your documentation.

1. Businco L, Dreborg S, Einarsson R, Giampietro PG, Host A, Keller KM, et al. Hydrolysed cow’s milk formulae. Allergenicity and use in treatment and prevention. An ESPACI position paper. European Society of Pediatric Allergy and Clinical Immunology. Pediatr Allergy Immunol 1993 Aug;4(3):101-11.
2. Host A. Cow’s milk protein allergy and intolerance in infancy. Some clinical, epidemiological and immunological aspects. Pediatr Allergy Immunol 1994;5(5 Suppl):1-36.
3. Chandra RK. Five-year follow-up of high-risk infants with family history of allergy who were exclusively breast-fed or fed partial whey hydrolysate, soy, and conventional cow’s milk formulas. J Pediatr Gastroenterol Nutr 1997 Apr;24(4):380-8.
4. Host A, Koletzko B, Dreborg S, Muraro A, Wahn U, Aggett P, et al. Dietary products used in infants for treatment and prevention of food allergy. Joint Statement of the European Society for Paediatric Allergology and Clinical Immunology (ESPACI) Committee on Hypoallergenic Formulas and the European Society for Paediatric Gastroenterology, Hepatology and Nutrition (ESPGHAN) Committee on Nutrition. Arch Dis Child 1999 Jul;81(1):80-4.
5. Muraro A, Dreborg S, Halken S, Host A, Niggemann B, Aalberse R, et al. Dietary prevention of allergic diseases in infants and small children. Part III: Critical review of published peer-reviewed observational and interventional studies and final recommendations. Pediatr Allergy Immunol 2004 Aug;15(4):291-307.
6. Muraro A, Dreborg S, Halken S, Host A, Niggemann B, Aalberse R, et al. Dietary prevention of allergic diseases in infants and small children. Part II: Evaluation of methods in allergy prevention studies and sensitization markers. Definitions and diagnostic criteria of allergic diseases. Pediatr Allergy Immunol 2004 Jun;15(3):196-205.
7. Muraro A, Dreborg S, Halken S, Host A, Niggemann B, Aalberse R, et al. Dietary prevention of allergic diseases in infants and small children. Part I: Immunologic background and criteria for hypoallergenicity. Pediatr Allergy Immunol 2004 Apr;15(2):103-11.

Stoplights, Experimental Design, Evidence-Based Medicine, and the Downside of Correctness

The Freakonomics blog posted a letter from reader Jeffrey Mindich about an interesting traffic experiment in Taiwan. Timers were installed alongside red and green traffic lights:

At 187 intersections which had the timers installed, those that counted down the remaining time on green lights saw a doubling in the number of reported accidents . . . while those that counted down until a red light turned green saw a halving in . . . the number of reported accidents.

Great research! Unexpected results. Simple, easy-to-understand design. Large effects — to change something we care about (such as traffic accidents) by a factor of two in a new way is a great accomplishment. This reveals something important — I don’t know what — about what causes accidents. I expect it can be used to reduce accidents in other situations.

It’s another example (in addition to obstetrics) of what I was talking about in my twisted-skepticism post — the downside of “correctness.” There’s no control group and (apparently) no randomization, yet the results are very convincing (that adding the timers caused the changes in accidents). The evidence-based medicine movement says treatment decisions should be guided by results from randomized controlled trials, nothing less. This evidence would fail their test. Following their rules, you would say: “This is low-quality evidence. Controlled experiment needed.” The Taiwan evidence is obviously very useful — it could lead to a vast worldwide decrease in traffic accidents — so there must be something wrong with their rules, which would delay or prevent taking this evidence as seriously as it deserves.

Twisted Skepticism (continued)

Writing about advances in obstetrics, Atul Gawande, like me, suggests there is a serious downside to being methodologically “correct”:

Ask most research physicians how a profession can advance, and they will talk about the model of “evidence-based medicine”—the idea that nothing ought to be introduced into practice unless it has been properly tested and proved effective by research centers, preferably through a double-blind, randomized controlled trial. But, in a 1978 ranking of medical specialties according to their use of hard evidence from randomized clinical trials, obstetrics came in last. Obstetricians did few randomized trials, and when they did they ignored the results. . . . Doctors in other fields have always looked down their masked noses on their obstetrical colleagues. Obstetricians used to have trouble attracting the top medical students to their specialty, and there seemed little science or sophistication to what they did. Yet almost nothing else in medicine has saved lives on the scale that obstetrics has. In obstetrics . . . if a strategy seemed worth trying doctors did not wait for research trials to tell them if it was all right. They just went ahead and tried it, then looked to see if results improved. Obstetrics went about improving the same way Toyota and General Electric did: on the fly, but always paying attention to the results and trying to better them. And it worked.

Is there a biological metaphor for this? A perfectly good method (say, randomized trials) is introduced into the population of medical research methods. Unfortunately for those in poor health, the new method becomes the tool of a dogmatic tendency, which uses it to reduce medical progress.

Twisted Skepticism

Scientists are fond of placing great value on what they call skepticism: not taking things on faith. Science versus religion is the point. In practice this means wondering about the evidence behind this or that statement, rather than believing it because an authority figure said it. A better term for this attitude would be: Value data.

A vast number of scientists have managed to convince themselves that skepticism means, or at least includes, the opposite of “value data.” They tell themselves that they are being “skeptical” — properly, of course — when they ignore data. They ignore it in all sorts of familiar ways. They claim “correlation does not equal causation” — and act as if the correlation is meaningless. They claim that “the plural of anecdote is not data” — apparently believing that observations not collected as part of a study are worthless. Those are the low-rent expressions of this attitude. The high-rent version is when a high-level commission delegated to decide some question ignores data that does not come from a placebo-controlled double-blind study, or something similar.

These methodological beliefs — that data above a certain threshold of rigor are valuable but data below that threshold are worthless — are based on no evidence; and the complexities and diversity of research make it highly unlikely that such a binary weighting is optimal. Human nature is hard to avoid, huh? Organized religions exist because they express certain aspects of human nature, including certain things we want (such as certainty); and scientists, being human, have a hard time not expressing the same desires in other ways. The scientists who condemn and ignore this or that bit of data desire a methodological certainty, a black-and-whiteness, a right-and-wrongness, that doesn’t exist.

How to be wrong.

If Not Noseclips, Dark Sunglasses?

In this interesting video about losing weight, Paul McKenna, a British hypnotist, recreates a study in which people ate food blindfolded. In the study, they ate one-quarter less when blindfolded than when not blindfolded. This doesn’t impress me; nothing is stopping the blindfolded subjects from eating more at later meals. But it makes me wonder how not seeing your food affects flavor-calorie learning. It might make it stronger (you’re less distracted) or it might make it weaker (the sight of food acts like glue to strengthen flavor-calorie associations — there is actually evidence for something like this).

Wearing noseclips while eating with others is too weird, but wearing dark sunglasses might not be. And what about listening to music (for distraction) while you eat? My calorie-learning experiments are continuing; eventually I should be able to test these possibilities.

Thanks to Gary Skaleski.

Science in Action: Omega-3 (motor-learning surprise, continued)

The results I described in the previous post surprised me because (a) my performance suddenly got better after being stable for many tests and (b) after the improvement, further practice appeared to make my performance worse. I’d never before seen either result in a motor-learning situation. If you can think of an explanation for the result that practice makes performance worse, and animal learning isn’t your research area, please let me know.

Learning researchers used to think of associative learning as a kind of stamping-in process: the more you experience A and B together, the stronger the association between them. Simple as that. In the 1960s, however, several results called this idea into question. Situations that should have caused learning did not. The feature that united the various results was that in each case, learning didn’t happen when the animal already expected the second event. If A and B occur together, and you already expect B, there is no learning. Theories that explained these findings — the Rescorla-Wagner model is the best known, but the Pearce-Hall model is the one that appears to be correct — took the discrepancy between expected and observed — an event’s “surprise factor” — rather than simply the event itself, to be what causes learning. We are constantly trying to predict the future; only when we fail do we learn.
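
To make the idea concrete, here is a small sketch of the Rescorla-Wagner rule (in Python; the learning-rate value is illustrative, not from any paper). The associative strength of each cue changes in proportion to the prediction error, so an outcome that is already fully predicted produces no new learning, which is the blocking result that helped motivate the model.

    def rw_update(V, cues, lam, alpha_beta=0.2):
        """One Rescorla-Wagner trial: nudge the strengths of the
        presented cues in proportion to the prediction error."""
        error = lam - sum(V[c] for c in cues)  # the "surprise factor"
        for c in cues:
            V[c] += alpha_beta * error
        return V

    V = {"A": 0.0, "X": 0.0}
    for _ in range(50):                  # Phase 1: cue A alone predicts the outcome
        rw_update(V, ["A"], lam=1.0)
    for _ in range(50):                  # Phase 2: A and X together, same outcome
        rw_update(V, ["A", "X"], lam=1.0)
    print(V)  # V["X"] stays near 0: the outcome was already expected, so no learning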

In my motor-learning task, imagine that the brain “expects” a certain accuracy. When actual accuracy is less, performance improves. Performance stops improving when actual accuracy equals expected accuracy. The effect of more omega-3 in the blood, and therefore the brain, was to increase expected accuracy. (One of the main things the brain does is learn. If we do something that improves brain performance in other ways, it is plausible that it will also improve learning ability.) Thus the sudden improvement. The decrement in accuracy with further practice came about because, when the omega-3 concentration went down, actual accuracy was better than expected accuracy. Accuracy was “over-predicted,” a learning theorist might say. So the observed change in performance was in the opposite-from-usual direction. Accuracy got worse, not better.
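
Here is a toy rendering of that account (all names and numbers below are my own illustrative assumptions, not measurements): performance drifts toward whatever accuracy the brain currently expects, and omega-3 temporarily raises that expectation.

    def simulate(trials_per_phase=20, rate=0.3):
        """Delta rule on the gap between expected and actual accuracy.
        Accuracy here is on a 0-1 scale, higher = better."""
        actual, history = 0.70, []
        phases = [("baseline", 0.70), ("omega-3 peak", 0.85), ("after peak", 0.70)]
        for label, expected in phases:
            for _ in range(trials_per_phase):
                actual += rate * (expected - actual)  # learn from the discrepancy
                history.append((label, round(actual, 3)))
        return history

    for label, acc in simulate()[::10]:
        print(label, acc)

In the sketch, actual accuracy climbs during the “omega-3 peak” phase (the sudden improvement) and then decays back toward the lower expectation afterward, so further practice makes performance worse, matching the pattern described above.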

Related happiness research: “Christensen’s study was called ‘Why Danes Are Smug,’ and essentially his answer was it’s because they’re so glum and get happy when things turn out not quite as badly as they expected.”