Methodological Lessons From My One-Legged-Standing Experiment

A few days ago I described an experiment that found standing on one leg improved my sleep. Four/day (= right leg twice, left leg twice) was better than three/day or two/day. I didn’t know that. For a long time I’d done two/day.

I think the results also contain more subtle lessons. At the level of raw methodology, I found that context didn't matter. The effect of four/day was nearly the same whether (a) I measured it on days with a dose of four embedded in a randomized design (where each day's dose was randomly chosen from two, three, and four) or (b) I measured it using a dose of four, day after day. Suppose I want to compare three and four. Which design should I use: (a) 3333344444, (b) 3434343434, or (c) 4433343434 (randomized)? The results suggest it doesn't matter.
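
To make the three designs concrete, here is a minimal sketch in Python. The sleep model, effect size, and noise level are made-up assumptions for illustration, not my data; the point is only to show how one might generate the blocked, alternating, and randomized schedules and estimate the three-vs-four difference the same way in each.

```python
import random
import statistics

def blocked_design(n_days, doses=(3, 4)):
    """First half of the days at one dose, second half at the other (e.g. 3333344444)."""
    half = n_days // 2
    return [doses[0]] * half + [doses[1]] * (n_days - half)

def alternating_design(n_days, doses=(3, 4)):
    """Doses alternate day by day (e.g. 3434343434)."""
    return [doses[i % 2] for i in range(n_days)]

def randomized_design(n_days, doses=(3, 4)):
    """Balanced schedule with the day order shuffled (e.g. 4433343434)."""
    schedule = blocked_design(n_days, doses)
    random.shuffle(schedule)
    return schedule

def simulate_sleep(schedule, effect_per_dose=0.3, noise_sd=0.5):
    """Hypothetical model: sleep quality rises with dose, plus night-to-night noise."""
    return [effect_per_dose * dose + random.gauss(0, noise_sd) for dose in schedule]

def estimated_effect(schedule, sleep):
    """Mean sleep rating on dose-4 days minus mean rating on dose-3 days."""
    four = [s for d, s in zip(schedule, sleep) if d == 4]
    three = [s for d, s in zip(schedule, sleep) if d == 3]
    return statistics.mean(four) - statistics.mean(three)

if __name__ == "__main__":
    for make_design in (blocked_design, alternating_design, randomized_design):
        schedule = make_design(10)
        sleep = simulate_sleep(schedule)
        print(make_design.__name__, schedule, round(estimated_effect(schedule, sleep), 2))
```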

The experiment didn’t take long (a few months) but it took me a long time to begin. I noticed the effect behind it (one-legged standing improves sleep) two years ago. Why did I wait so long to do an experiment about details?

I was already collecting the data (on paper) — writing down how long I slept, rating how rested I felt, etc. But I wasn’t entering that data in my laptop. To transfer months of data into my laptop required motivation. Most of my self-experimentation has been motivated by the possibility of big improvements — much less acne, much better mood, and so on. That wasn’t possible here. I slept well, night after night.

What broke the equilibrium of doing nothing? A growing sense of loss. I knew I was throwing away something by not doing experiments (= doing roughly the same thing day after day). The longer I did nothing, the more I lost. To say this in an extreme way: I had discovered a way to improve sleep that was unconnected to previous work — sleep experts haven’t heard of anything like it. It was real progress. To fail to figure out details was like finding a whole new place and not looking around. Moreover, the experiments wouldn’t even be difficult. The treatment takes less than a day and you measure its effect the next morning. This is much easier than lots of research. Suppose you know that radioactivity is bad and you discover something radioactive in your house. A sane person would move that radioactive thing as far away as possible — minimizing the harm it does. I had discovered something beneficial yet wasn’t trying to maximize the benefits. Crazy!

An early lesson I learned about experimentation is to run each condition much longer than might seem necessary. If you think a condition should last a week, do it for a month. Things will turn out to be more complicated than you think, and having more data will help you deal with the additional complexity that turns up. Now it was clear I had gone too far in the direction of passivity. I did the experiment; it was helpful; I could have done it a year ago.

3 thoughts on “Methodological Lessons From My One-Legged-Standing Experiment”

  1. I think a public website for running experiments, with automatic data handling and room for comments, would be great.

    Benefits.
    1) A database of experiments would allow others to use the information, and would provide a wide range of ideas for those who want to start.

    2) Experimental convenience. If the tools are good they can be very helpful (software can even suggest variations and so on).

    3) Motivation. Public attention and use can be very motivating.

  2. Think how useful it would be just to have all your self-experimental results tabulated in lines, with links to longer discussions.

    In one look one could figure out what might be relevant to him and also get an impression of how strong the idea is.

    Double this with an assortment of dozens of such results (not to mention interaction effects).
