I realized (in both senses) several ways to improve my omega-3 self-experimentation:
1. Simpler treatment. I had been drinking both walnut oil and flaxseed oil. For the sake of simplicity, I stopped the walnut oil. I continued to drink 2 tablespoons/day of flaxseed oil. I will vary the amount of flaxseed oil.
2. More controlled measurement. Instead of balancing on any part of my right foot, I started balancing only on the ball of my right foot.
3. More measurement. I measure my balance once/day. During that one session I had been measuring my balance 20 times (how long I could stand on a platform before falling off, yielding 20 durations). The first 5 durations were warm-up, leaving 15 that counted. I increased the total number of trials to 30. It is still easy; the whole thing takes about 10 minutes.
4. A new measure. Anything that affects balance is likely to affect other mental abilities, I believe. To test this belief, I will start measuring my brain in a new way: a pencil-and-paper version of Saul Sternberg’s memory-scanning task. It will take about 5 minutes.
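For concreteness, the session scoring in #3 can be sketched in code. This is a hypothetical sketch, not the author's actual analysis: the durations below are made-up numbers, and `WARMUP_TRIALS` simply names the 5 warm-up trials described above.

```python
# Sketch of summarizing one balance session: 30 timed trials,
# the first 5 of which are warm-up and do not count.

WARMUP_TRIALS = 5

def session_summary(durations):
    """Drop the warm-up trials and return the mean of the scored ones."""
    scored = durations[WARMUP_TRIALS:]
    return sum(scored) / len(scored)

# 30 made-up durations (seconds): 5 warm-up trials, then 25 scored trials.
durations = [4.1, 5.0, 6.2, 5.8, 6.0] + [7.5] * 25
print(round(session_summary(durations), 2))  # → 7.5
```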
I started #1-3 about a week ago and will start #4 today.
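The memory-scanning task in #4 can also be sketched. This is a hypothetical computerized version for illustration only; the instrument described above is pencil-and-paper, and the set sizes here are assumptions. In Sternberg's paradigm, each trial presents a small set of digits to memorize, then a probe digit to classify as present or absent.

```python
import random

def make_trial(set_size, rng):
    """Build one Sternberg trial: a memory set, a probe digit, and
    whether the probe is actually in the set (the correct answer)."""
    memory_set = rng.sample(range(10), set_size)
    present = rng.random() < 0.5
    if present:
        probe = rng.choice(memory_set)
    else:
        probe = rng.choice([d for d in range(10) if d not in memory_set])
    return memory_set, probe, present

rng = random.Random(0)
memory_set, probe, present = make_trial(4, rng)
print(memory_set, probe, present)
```

The measure of interest in this paradigm is how response time grows with set size; the slope estimates the scanning rate.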
While I appreciate your overarching message about bringing experimentation into more people's lives, at first glance (I just came to your blog, and this is the first entry I've read) this particular experiment seems fraught with problems.
Going from least concerning to most:
There’s the issue of respondent/participant bias: you have a vested interest in finding change/results. Also, you didn’t just come across this intervention randomly, and the same belief or hunch that led you to choose it (and the implied hypothesis being tested) also raises the worry of a placebo effect. In short, there are both conscious and unconscious ways in which you’re likely to distort your own results.
More troubling is the obvious role that the “practice effect” can play here. I’d fully expect your balancing ability to improve as you continue, as there are countless anecdotal examples of people improving their balance skills through repeated practice. And while adding another measurement may help your conceptual validity, it’s yet another measure that is subject to the practice effect. (Sternberg used control groups, which you aren’t using.)
Practice effects are dealt with by the experimental design: You continue the measurements until the improvement stops. Then you begin the new treatment. As for the placebo-effect explanation of improvement, that’s an explanation I plan to test.
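That stopping rule ("continue until the improvement stops") can be sketched as a plateau check on the daily means. This is a minimal sketch under assumptions of my own: the window size and slope threshold are arbitrary illustrative choices, not the author's actual criteria.

```python
def slope(ys):
    """Least-squares slope of ys against session index 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def plateaued(daily_means, window=7, threshold=0.1):
    """True once the recent trend (seconds per session) is near zero,
    i.e. practice-driven improvement appears to have stopped."""
    if len(daily_means) < window:
        return False
    return abs(slope(daily_means[-window:])) < threshold

# A flat made-up baseline: the trend is near zero, so begin treatment.
print(plateaued([5.0, 5.1, 4.9, 5.0, 5.1, 5.0, 4.95]))  # → True
```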
Just to be clear: are you saying that, prior to beginning any treatment with the flaxseed oil and/or walnut oil, you kept up this identical regimen of balance measurement until your rate of improvement stabilized? Really? You never mentioned that in your description. Moreover, you’ve very clearly described that you’re currently mid-treatment and have only now thought of adding this new memory instrument, so certainly no designed control of the practice effect there.
Moreover, while these steps would at least be a nod to the problem of the practice effect, they’re not a viable solution. Do you have any data showing that learned improvement on your balancing challenge and the memory instrument is continuous? There are plenty of cases of learning, both cognitive and physical, where plateaus and spurts are common. How do we know that any plateau really represents the end of change attributable to practice-based learning? Short answer: we don’t.
I think it’s wonderful to find creative ways to bring research into our individual lives. But let’s also remember that, as scientists, we don’t design in control groups on a lark. They’re expensive, resource-intensive, you name it. And this case is quickly striking me as an example of why that additional work, beyond anecdotal single-case experience, is sometimes needed.
“There are plenty of cases of learning, both cognitive and physical, where plateaus and spurts are common.” I don’t know of a single example where a motor task (such as balance) suddenly improves after reaching a plateau. What’s an example?
These experiments also involve sudden decrements in performance, which of course cannot be explained as a practice effect.
I’d say that most attempts at improving motor tasks, be it balancing on a board, learning to juggle, etc., are typified by nonlinear progress. It’s certainly the case with physical rehabilitation of motor skills (though of course there’s a question whether that’s analogous to learning). Plateaus, and even dips, are common. Moreover, as a researcher, the burden of proof is yours. You’re choosing a novel instrument/test for data; it’s up to you either to test how the practice effect plays out with that instrument or, more sensibly, to control for the learning.
Could you give a specific example, not involving physical rehabilitation, of a motor task where somebody observed sudden improvement after a plateau? With a reference, hopefully, so that the rest of us can find out about it?