Journal of Personal Science: Effect of Meditation on Math Speed


by Peter Lewis

Background

I’ve been practicing meditation on and off for years. It doesn’t interest me in a spiritual sense; I do it because I think it improves my mental function. However, what I’ve read suggests there isn’t a lot of evidence to support that. For example, John Horgan in Scientific American:

Meditation reportedly reduces stress, anxiety and depression, but it has been linked to increased negative emotions, too. Some studies indicate that meditation makes you hyper-sensitive to external stimuli; others reveal the opposite effect. Brain scans do not yield consistent results, either. For every report of heightened neural activity in the frontal cortex and decreased activity in the left parietal lobe, there exists a contrary result.

From a 2007 meta-analysis of 800+ studies:

Most clinical trials on meditation practices are generally characterized by poor methodological quality with significant threats to validity in every major quality domain assessed.

Most of this research asked questions different from mine. The studies used physical measures like blood pressure, studied complex states like depression and stress, or isolated, low-level “executive functions” like working memory. My question was simpler: Is meditation making me smarter? “Smarter” is a pretty complex thing, so I wanted to start with a broad, intuitive measure. There’s a free app called Math Workout (Android, iPhone) that I’ve been using for years. It has a feature called World Challenge that’s similar to what Seth developed to test his own brain function: it gives you fifty arithmetic problems and measures how fast you solve them. Your time is compared to all other users in the world that day. This competitive element has kept me using it regularly, even though I had no need for better math skills.

Study Design

I only had about a month, so I decided on a 24-day experiment.

Measurement. Every day for the whole experiment, I completed at least four trials with Math Workout: three successive ones in the morning, within an hour of waking up, and at least one later in the day. For each trial, I recorded my time, number of errors and the time of day. Math Workout problems range from 2+2 to squares and roots. The first ten or so are always quite easy and they get more difficult after that, but this seems to be a fixed progression, unrelated to your performance. Examples of difficult problems are 3.7 + 7.3, 93 + 18, 14 * 7, and 12² + √9. If you make a mistake, the screen flashes and you have to try again on the same problem until you get it right. As soon as you answer a problem correctly, the next one appears.

Treatment. I used an ABA design. For the first seven days, I just did the math, with no meditation. (I hadn’t been meditating at all during the 3-4 weeks before the start of the experiment.) For the next ten days, I meditated for at least ten minutes every morning within an hour of waking, and did the three successive math trials immediately afterward. I did a simple breath-counting meditation, similar to what’s described here. The recorded meditations that I gave the other participants were based on Jon Kabat-Zinn’s Mindfulness Based Stress Reduction program and also focused on awareness of breathing, though without the counting element. The final seven days were a second baseline period, with no meditation.
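For concreteness, here is a minimal sketch in Python of how each trial and the three ABA phases could be logged for later analysis. The field names, the phase() helper, and the example record are hypothetical illustrations, not the format actually used in the experiment.

```python
# Minimal sketch of per-trial records and the ABA phase schedule.
# Field names, the example date, and the phase() helper are hypothetical.
from dataclasses import dataclass
from datetime import date, time

@dataclass
class Trial:
    day: date        # calendar date of the trial
    clock: time      # time of day the trial started
    seconds: float   # time to answer all 50 problems correctly
    errors: int      # wrong answers along the way

def phase(day_number: int) -> str:
    """Map day 1-24 of the experiment to its ABA phase."""
    if day_number <= 7:
        return "baseline1"    # days 1-7: no meditation
    if day_number <= 17:
        return "meditation"   # days 8-17: morning meditation before the math trials
    return "baseline2"        # days 18-24: no meditation

# Example record: a morning trial on day 9 of the experiment
t = Trial(day=date(2013, 6, 9), clock=time(7, 40), seconds=76.0, errors=2)
print(phase(9), t.seconds, "s,", t.errors, "errors")
```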

Before beginning, I posted about this experiment on Facebook, and I was pleasantly surprised to get eleven other volunteers who were willing to follow the same protocol and share their data with me. I set up online spreadsheets for each participant where they could enter their results. I also emailed them a guided ten-minute meditation in mp3 format. It was a fairly simple breathing meditation, secular and non-denominational.

Results

Meditation had a small positive effect. During the meditation period, my average time to correctly answer 50 problems was 75 seconds, compared to 81 during the first baseline — a drop of 7% — and the times also dropped slightly over the ten days (slope of trendline: -0.6 seconds/day). When I stopped meditating, my times trended sharply back up (slope: 1.0 seconds/day) to an average of 78 seconds during the second baseline period. These trends suggest that the effect of meditation increased with time, which is in line with what most meditators would tell you: the longer you do it consistently, the better it works. My error rates were flatter — 2.1 errors per 50 correct answers in the first baseline period, 2.2 during the meditation period, and 2.5 during the second baseline — and did not display the same internal trends.

[Graph: daily Math Workout times across the first baseline, meditation, and second baseline periods]
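The per-phase averages and trend-line slopes quoted above are simple to reproduce on your own logs. Below is a hedged sketch using NumPy; the numbers in the example call are made up for illustration and are not the data behind the graph.

```python
# Sketch of the summary statistics reported above: the mean time for a phase
# and the slope of a least-squares trend line (seconds/day) within that phase.
import numpy as np

def phase_summary(times_by_day):
    """times_by_day: one average time (in seconds) per day of the phase."""
    y = np.asarray(times_by_day, dtype=float)
    x = np.arange(len(y))                    # day index within the phase
    slope, _intercept = np.polyfit(x, y, 1)  # degree-1 fit returns [slope, intercept]
    return y.mean(), slope

# Illustrative numbers only (not the real data):
mean_s, slope_s = phase_summary([80, 78, 77, 76, 75, 74, 74, 73, 72, 72])
print(f"mean = {mean_s:.1f} s, trend = {slope_s:+.2f} s/day")
```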

Of the other eleven subjects, six stuck with the experiment till the end. Their data was messier, because they were new to the app and there’s a big practice effect. Because of this, I focused less on finding a drop from the first control period to the meditation period (which you’d expect anyway from practice) and more on finding an increase in times during the second control period (which you wouldn’t expect to see unless the meditation had been helping).

Taking that into account, three of the six subjects seemed to me to display a similar positive effect to mine. Two I’d call inconclusive, and one showed a clear negative effect. (Here is the data for these other subjects.)

What I Learned

I found these results encouraging. Like Seth, I take this kind of basic math exercise to be a good proxy for general brain function. Anything that makes me better at it is likely to also improve my performance on other mental tasks. As I mentioned above, I’ve been using this particular app for years, and my times plateaued long ago, so finding a new factor that produces a noticeable difference is striking. An obvious concern is that I was trying harder on the days that I meditated. Since it’s impossible to “blind” subjects as to whether they’ve meditated or not, I can’t think of a perfect way to correct for this. If meditation does make me faster at math, what are the mechanisms? For example, does it improve my speed at processing arithmetic problems, my speed of recall on the ones I know from memory (e.g. times tables), or my decisiveness once I think I have an answer? It felt like the biggest factor was better focus. I wasn’t solving the problems faster so much as cutting down on the fractional seconds of distraction between them.

Improvements

It would have helped to have a longer first control period, as Seth and others advised me before I began. I was scheduled to present my results at this conference and at the time it was only a month away, so I decided to make the best of the time I had. Next time I’ll have a three- or four-week baseline period, especially if I’m including subjects who haven’t meditated before. The single biggest improvement would be to recruit non-meditators to follow the same protocol. Most of the other volunteers, like me, were interested because they were already positively disposed towards meditation as a daily habit. I don’t think they liked the idea of baseline periods when they couldn’t meditate, and this probably contributed to the dropout rate. (If I’d tried to put any of them in a baseline group that never meditated at all and just did math, I doubt any of that group would have finished.) It might be easier to recruit people who already use this app (or other math games) and get them to meditate than vice versa. That would also reduce the practice effect problem, and the effects of meditation might be stronger in people who are doing it for the first time. More difficult math problems might be a more sensitive measure, since I wouldn’t be answering them from memory. Nothing super-complex, just two- or three-digit numbers (253 + 178).

I’m planning to repeat this experiment myself at some point, and I’m also interested in aggregating data from others who do something similar, either in sync with me as above, or on your own timeline and protocol. I’d also appreciate suggestions for how to improve the experimental design.

Comment by Seth

The easiest way to improve this experiment would be to have longer phases. Usually you should run a phase until your measure stops changing and you have collected plenty of data during a steady state. (What “plenty of data” is depends on the strength of the treatment you are studying. Plenty of data might be 5 points or 20 points.) If it isn’t clear how long it will take to reach steady state, deciding in advance the length of a phase is not a good idea.

Another way to improve this experiment would be to do statistical tests that generate p values; this would give a better indication of the strength of the evidence. Because this experiment didn’t reach steady states, the best tests are complicated (e.g., comparison of slopes of fitted lines). With steady-state data, these tests are simple (e.g., comparison of means).
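As a rough illustration of both kinds of test, here is a sketch using SciPy: a Welch t-test comparing the daily times of two phases (the steady-state case) and an approximate two-sided z-test on the difference between two fitted slopes (the non-steady-state case). The function names are made up, and the slope comparison treats the two slope estimates as independent and roughly normal, which is a simplification.

```python
# Sketch of the two tests mentioned above (function names are hypothetical).
import numpy as np
from scipy import stats

def compare_means(times_a, times_b):
    """Welch t-test on daily times from two phases (steady-state case)."""
    return stats.ttest_ind(times_a, times_b, equal_var=False)

def compare_slopes(times_a, times_b):
    """Approximate z-test on the difference between two fitted slopes."""
    fit_a = stats.linregress(np.arange(len(times_a)), times_a)
    fit_b = stats.linregress(np.arange(len(times_b)), times_b)
    z = (fit_a.slope - fit_b.slope) / np.hypot(fit_a.stderr, fit_b.stderr)
    p = 2 * stats.norm.sf(abs(z))  # two-sided p value
    return z, p

# Usage with your own lists of daily times, e.g.:
# t, p = compare_means(meditation_times, baseline1_times)
# z, p = compare_slopes(meditation_times, baseline2_times)
```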

If you are sophisticated at statistics, you could look for a time-of-day effect (are tests later in the day faster?), a day-of-week effect, and so on. If these effects exist, their removal would make the experiment more sensitive. In my brain-function experiments, I use a small number of problems so that I can adjust for problem difficulty. That isn’t possible here.
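One simple way to remove a time-of-day effect, sketched below, is to regress each trial's time on its hour of day and then compare the re-centered residuals across phases instead of the raw times. The sketch assumes the effect is roughly linear in the hour; a day-of-week effect could be handled the same way with indicator variables.

```python
# Sketch: remove an (assumed linear) hour-of-day effect before comparing phases.
import numpy as np

def detrend_time_of_day(times, hours):
    """Return trial times with a linear hour-of-day trend removed, re-centered on the mean."""
    y = np.asarray(times, dtype=float)
    h = np.asarray(hours, dtype=float)
    X = np.column_stack([np.ones_like(h), h])      # design matrix: intercept + hour
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    return y - X @ coefs + y.mean()                # residuals plus the overall mean
```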

These comments should not get in the way of noticing that the experiment answered the question Peter wanted to answer. I would follow up these results by studying similar treatments (listening to music for 10 minutes, sitting quietly for 10 minutes, and so on) to learn more about why meditation has an effect. The better you understand that, the better you can use it (make the effect larger, more convenient, and so on).

 

Tsinghua Graduation Memento Statement

The first class of Tsinghua psychology majors in a half-century is graduating in a few days. (The Tsinghua psychology department was closed in the 1950s — Soviet-style university reorganization — and reopened in 2008.) The seniors asked their professors for statements to be included in a memento book. My contribution:

I remember our first day of class (Frontiers of Psychology). It was my first time teaching in China. It was on a Monday, maybe it was your first class at Tsinghua. Some things surprised me. Moving from students in the front row to students in the back, English ability got worse. Each student said their name. When one student said her Chinese name, everyone laughed. I still do not understand this. This had never happened in my American classes. A student had her picture taken with me. This too never happened in America. There were two graduate students in the class. Both of them volunteered to be teaching assistants. In America, no graduate students attended my undergraduate classes, and you need to pay them a lot of money to be teaching assistants. (At Tsinghua, that was the only time graduate students came to my class.) The graduate student who became my teaching assistant told you, “Don’t say My English is poor. Say My English is on the way.” I can tell you now I disagree. It is confusing to say My English is on the way. There is nothing wrong with saying My English is poor. I say 我的汉语不很好 all the time. We were all so new that we weren’t sure when class ended! That was the first thing you made me learn: The length of a class period. I enjoyed having dinner with you. You were less afraid of me than my Berkeley students. I especially remember dinner with 徐胜眉, who told me the Chinese side of the debate about the Chinese takeover of Tibet. Most people in America, including professors like me, had no idea there is another side. I had had a big gap in my knowledge and hadn’t even realized it. The most important thing I learned from you was how to teach better. The homework you did was very good but I was puzzled how to grade it. From talking with you at dinner and listening to you in class, I could tell that all of you were excellent students. It did not seem like a good idea to make it difficult to get the highest grade, but what was the alternative? This was the puzzle that you pushed me to solve. Eventually I changed how I teach quite a bit, as you may know from talking to students from last year’s Frontiers of Psychology. Thank you for that, and may you teach your future teachers as well as you taught me.

Because my students were so good, they made me see the deficiencies in usual teaching methods especially clearly. It really did seem idiotic to take perfectly good work and carefully divide it into piles of best, good, and less good (and give each pile a different grade). Surely there were better uses of my time than making such distinctions and better uses of their time and mental energy than trying to do exactly what I wanted.

When I visited Berkeley to be considered for an assistant professor job, one of the interviews was with graduate students. One of them asked, “Which do you like better, teaching or research?” “Research,” I said. They laughed. All Berkeley professors prefer research, but you’re supposed to say you like them equally. I was unaware of this. I did like research more, and still do, which is why I am surprised that I talk about teaching so much. I told a friend at lunch recently that it was weird how much I talk about my teaching ideas.

Why Do Magic Dots Work?

[Image: the counting method]

I’ve posted several times about the use of what I call “magic dots” to get things done. You make a dot or line every six minutes of work. I use the counting method shown above. The effect was first seen in pigeons. A similar effect was discovered (by accident) in rats.

It works amazingly well. “The magic dots have been magic,” said a user named Joan. It would be nice to know why — maybe the effect can be made even stronger. Joan commented in an email:

I have been thinking about why this has worked for me – I think it’s that there is an almost immediate “reward”, so I get started right away. Since the reward does not have any associations, there’s no inner conflict sabotaging it. For instance, I might feel guilty if I ate a jelly bean every 6 minutes, or I might just eat them anyway. I’m not “deprived” if I don’t get to add more dots and lines, and I know I can just get back to work and start writing dots again.

Certainly the dots – or the act of making a dot — act as a reward. But why? If I’m writing something, why do the dots have an effect when I can already see my progress by looking at what I’ve written? I’m already making marks.

The consistency of the marks — the same mark, again and again — may make a difference. Presumably the brain needs to notice a correlation: Writing (or whatever the difficult task is) produces both marks and progress (= a sense of satisfaction). Other activities produce neither. The more identical the marks, the easier it is to see the correlation. When I write, there is no single consistent mark of progress.

Maybe other people have independently discovered this, without knowing about the pigeon results. Their methods might shed light on what you need to do to get the effect. I don’t know of any independent discoveries. The closest thing I can think of is that most computer games provide markers of your progress throughout the game, such as level advancement.

David Grimes Responds to Comments

In recent posts (here, here, and here), I’ve described the ideas of David Grimes, a British doctor, about the cause of heart disease. Grimes recently responded to comments on the last post:

First, to develop the latitude theme, that distance from the equator determines risk of heart disease, cancers, multiple sclerosis and others. Four visual pieces of evidence for you.

[Map: average UK sunshine, 1971-2000]

The sunshine map of the UK: We see what would also be the map of multiple sclerosis and CHD in the UK — both diseases most common in the west of Scotland and least common in the south-east of England. Average life expectancy follows a similar pattern.

Look at cancer incidence in North America for another latitude effect.

Then there is breast and colon cancer in Europe:

But the [most] important observation of the sun being protective against cardiovascular disease comes from the USA. A latitude effect is present but weak. However a longitude effect is powerful. It works out as an altitude effect — the higher the altitude of residence the lower the risk of death from cardio-vascular disease (coronary heart disease + stroke). It is interesting to note the mirror image of the land profile from east to west and the CVD death profile. This can be explained most simply and most plausibly by the higher UV exposure at higher altitudes.

This is a powerful supplement to the latitude observations in Europe. The [north-south] length of Europe is worth remembering: the north of Scotland is the same latitude as Hudson Bay. In the north of England I live further north than anywhere in China. This means big sun exposure effects.

The size of the disease differences is impressive — e.g., a factor of 2. I think these sunshine correlations are due either to a protective effect of Vitamin D or a protective effect of sleep (more sunshine = better sleep). There’s no doubt that sleep quality depends on the amplitude of a circadian rhythm (greater amplitude = better sleep), which in turn depends on the amplitude of the sunlight intensity rhythm, the day-night difference.

Assorted Links

  • Walking after a meal improves blood sugar
  • A look at QSers. “Some of the most societally redefining concepts now emerge from edge-thinkers, who are increasingly visible, organized, and effective, in part due to the Web. Even so, whenever I spoke to them or read their blogs, at some point I always wondered, why?”
  • Steve McIntyre vindicated. RealClimate says: “That is the most disquieting legacy of Steve McIntyre and ClimateAudit [McIntyre’s blog]. The real Yamal deception is their attempt to damage public confidence in science by making speculative and scandalous claims about the actions and motivations of scientists while cloaking them in a pretense of advancing scientific knowledge.” A comment on ClimateAudit: “It’s quite obvious that in 2009 and again in 2011, you shamelessly plagiarised Briffa 2013.”

Thanks to Jazi Zilber and Phil Alexander.

Does Alternate-Day Fasting Lower HbA1c?

This graph shows my HbA1c values in recent years. After a lot of variation, they settled down to 5.8, which was the measurement a month ago. 5.8 isn’t terrible — below 6.0 is sometimes called “okay” — but there is room for improvement. In a large 2010 study, average HbA1c was 5.5. The study suggested that an HbA1c of about 5.0 was ideal.

Three weeks ago I started alternate-day fasting (= eating much less than usual every other day) for entirely different reasons. Although people sometimes find alternate-day fasting unpleasant (they get too hungry on the fast days), I haven’t noticed this. I blogged recently that within days of starting, my fasting blood sugar levels greatly improved. Yesterday I got my HbA1c measured again. It was 5.4 — much better. This supports the idea that alternate-day fasting is helping a lot. HbA1c reflects average blood glucose over the previous 8-12 weeks, so there could easily be more improvement.

“Whether intermittent fasting can be used as a tool to prevent diabetes in those individuals at high risk or to prevent progression in those recently diagnosed with Type 2 diabetes remains a tantalizing notion,” said an author of a recent paper on the subject. My experience suggests that you can easily find out for yourself whether intermittent fasting will help. It took only a week to be sure that my fasting blood sugar had improved and only three weeks to have a good idea that my HbA1c had improved. My improvement was almost as fast and clear as what happens when people with a vitamin deficiency are given the vitamin they need.

There are countless ways of doing alternate-day fasting (or, more generally, intermittent fasting). A clinical trial usually tests just one way, which you may not want to copy exactly. My results suggest that blood sugar measurements provide an easy way to tell if your particular version of intermittent fasting is helping.