Saturated Fat and Heart Attacks

After I discovered that butter made me faster at arithmetic, I started eating half a stick (66 g) of butter per day. After a talk about it, a cardiologist in the audience said I was killing myself. I said that the evidence that butter improved my brain function was much clearer than the evidence that butter causes heart disease. The cardiologist couldn’t debate this; he seemed to have no idea of the evidence.

Shortly before I discovered the butter/arithmetic connection, I had a heart scan (a tomographic x-ray) from which an Agatston score is computed, a measure of calcification of your blood vessels. The Agatston score is a good predictor of whether you will have a heart attack: the higher your score, the greater the probability. My score put me close to the median for my age. A year later — after eating lots of butter every day during that year — I got a second scan. Most people's scores get about 25% worse each year. My second scan showed regression (= improvement): it was about 40% better (lower) than expected, where "expected" means my first score plus the usual 25% increase. A big increase in butter consumption was the only aspect of my diet that I consciously changed between Scan 1 and Scan 2.
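
To make the arithmetic concrete, here is the comparison in R with a made-up starting score (my actual scores don't matter for the logic):

    # Illustration with a made-up starting score, not my actual Agatston score.
    scan1    <- 100                  # hypothetical Scan 1 score
    expected <- scan1 * 1.25         # typical course: about 25% worse after a year
    scan2    <- expected * 0.60      # observed: about 40% lower than expected
    scan2                            # 75 -- lower than Scan 1, i.e. regression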

The improvement I observed, however surprising, was consistent with a 2004 study that measured narrowing of the arteries as a function of diet. About 200 women were studied for three years. There were three main findings. 1. The more saturated fat, the less narrowing. Women in the highest quartile of saturated fat intake didn’t have, on average, any narrowing. 2. The more polyunsaturated fat, the more narrowing. 3. The more carbohydrate, the more narrowing. Of all the nutrients examined, only saturated fat clearly reduced narrowing. Exactly the opposite of what we’ve been told.

As this article explains, the original idea that fat causes heart disease came from Ancel Keys, who omitted most of the available data from his data set. When all the data were considered, there was no connection between fat intake and heart disease. There has never been convincing evidence that saturated fat causes heart disease, but somehow this hasn’t stopped the vast majority of doctors and nutrition experts from repeating what they’ve been told.

Teeth Clenching Can Release Too Much Mercury

Recently the Berkeley City Council heard testimony about a proposed ban on mercury amalgam dental fillings. A young man named D— M—, shown in the video, told the Council that he had grown up in Berkeley and had gotten mercury amalgam fillings from local dentists. They did not tell him the fillings were dangerous. He attended Berkeley High, then Harvard, and finally the clinical psychology program at UC Berkeley — which, as he said, is extremely hard to get into (I know this to be true; it accepts about 1 in 500 applicants).

In 2007, three years into the program, he started clenching his teeth. He began to have problems resembling mercury poisoning, such as fatigue and poor concentration. He had to leave the psychology program. Hair tests showed large amounts of mercury. He did not eat unusual amounts of fish, so it’s likely that his fillings were the source of the mercury. By 2012, he could no longer work and pay rent.

I had no idea that teeth clenching and mercury fillings were so dangerous together. A few years ago, I found, to my surprise, that removal of mercury fillings improved my score on the reaction time test I use to measure brain function. At first, I had thought the improvement had other causes. Only when I tested these causes and found no supporting evidence did I look further and discover the improvement had started exactly when I got my fillings removed. After I discovered this, I looked around for other evidence that mercury fillings were dangerous. To my surprise (again), my evidence seemed more persuasive than anything I found. M—’s story is much scarier than mine and supports my conclusion that mercury fillings are dangerous.

Had M— been using my reaction-time test day after day, he might have discovered deterioration on that test before he noticed other problems. The test might have provided early warning. I hadn’t noticed problems with concentration or fatigue, yet when my fillings were removed I got better on my test. Had M— noticed the problem earlier, he might have figured out the cause earlier.

If you don’t monitor yourself as I do — and almost no one does — you are trusting your dentist, your doctor, your food providers, and so on, to be well-informed and truthful about the safety of their products. If the problems aren’t obvious, there is plenty of reason for them to put their hands over their eyes and say “I don’t want to know” about problems with their products. Drug companies have often hidden the dangers of their products and surgeons have hidden the dangers of their procedures. Few people grasp that “evidence-based medicine”, with its disregard of bad side effects, is biased in favor of doctors. (Ben “Bad Science” Goldacre is a prominent example of someone who fails to understand this.) If you monitor yourself you are less at the mercy of other people’s poor science, lies, and motivations that conflict with finding and telling the truth.

Journal of Personal Science: Effect of Meditation on Math Speed


by Peter Lewis

Background

I’ve been practicing meditation on and off for years. It doesn’t interest me in a spiritual sense; I do it because I think it improves my mental function. However, what I’ve read suggests there isn’t a lot of evidence to support that. For example, John Horgan in Scientific American:

Meditation reportedly reduces stress, anxiety and depression, but it has been linked to increased negative emotions, too. Some studies indicate that meditation makes you hyper-sensitive to external stimuli; others reveal the opposite effect. Brain scans do not yield consistent results, either. For every report of heightened neural activity in the frontal cortex and decreased activity in the left parietal lobe, there exists a contrary result.

From a 2007 meta-analysis of 800+ studies:

Most clinical trials on meditation practices are generally characterized by poor methodological quality with significant threats to validity in every major quality domain assessed.

Most of this research asked questions different from mine. The studies used physical measures like blood pressure, studied complex states like depression and stress, or isolated low-level “executive functions” like working memory. My question was simpler: Is meditation making me smarter? “Smarter” is a pretty complex thing, so I wanted to start with a broad, intuitive measure. There’s a free app called Math Workout (Android, iPhone) that I’ve been using for years. It has a feature called World Challenge that’s similar to what Seth developed to test his own brain function: it gives you fifty arithmetic problems and measures how fast you solve them. Your time is compared to all other users in the world that day. This competitive element has kept me using it regularly, even though I had no need for better math skills.

Study Design

I only had about a month, so I decided on a 24-day experiment.

Measurement. Every day for the whole experiment, I completed at least four trials with Math Workout: three successive ones in the morning, within an hour of waking up, and at least one later in the day. For each trial, I recorded my time, number of errors and the time of day. Math Workout problems range from 2+2 to squares and roots. The first ten or so are always quite easy and they get more difficult after that, but this seems to be a fixed progression, unrelated to your performance. Examples of difficult problems are 3.7 + 7.3, 93 + 18, 14 * 7, and 12² + √9. If you make a mistake, the screen flashes and you have to try again on the same problem until you get it right. As soon as you answer a problem correctly, the next one appears.

Treatment. I used an ABA design. For the first seven days, I just did the math, with no meditation. (I hadn’t been meditating at all during the 3-4 weeks before the start of the experiment.) For the next ten days, I meditated for at least ten minutes every morning within an hour of waking, and did the three successive math trials immediately afterward. I did a simple breath-counting meditation, similar to what’s described here. The recorded meditations that I gave the other participants were based on Jon Kabat-Zinn’s Mindfulness Based Stress Reduction program and also focused on awareness of breathing, though without the counting element. The final seven days were a second baseline period, with no meditation.

Before beginning, I posted about this experiment on Facebook, and I was pleasantly surprised to get eleven other volunteers who were willing to follow the same protocol and share their data with me. I set up online spreadsheets for each participant where they could enter their results. I also emailed them a guided ten-minute meditation in mp3 format. It was a fairly simple breathing meditation, secular and non-denominational.

Results

Meditation had a small positive effect. During the meditation period, my average time to correctly answer 50 problems was 75 seconds, compared to 81 during the first baseline — a drop of about 7% — and the times also dropped slightly over the ten days (slope of trendline: -0.6 seconds/day). When I stopped meditating, my times trended sharply back up (slope: 1.0 seconds/day) to an average of 78 seconds during the second baseline period. These trends suggest that the effect of meditation increased with time, which is in line with what most meditators would tell you: the longer you do it consistently, the better it works. My error rates were flatter — 2.1 errors per 50 correct answers in the first baseline period, 2.2 during the meditation period, and 2.5 during the second baseline — and did not show the same internal trends.
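
For anyone who wants to run the same summary on their own numbers, the calculations amount to something like this in R (the column names are placeholders and the data below are simulated, not my actual spreadsheet):

    # Sketch of the summary statistics above; 'day', 'phase' and 'seconds'
    # are placeholder column names, and the numbers are simulated.
    d <- data.frame(
      day     = 1:24,
      phase   = rep(c("baseline1", "meditation", "baseline2"), c(7, 10, 7)),
      seconds = c(rnorm(7, 81, 3), rnorm(10, 75, 3), rnorm(7, 78, 3))
    )
    tapply(d$seconds, d$phase, mean)    # mean time per phase
    coef(lm(seconds ~ day, data = d, subset = phase == "meditation"))["day"]  # slope, meditation phase
    coef(lm(seconds ~ day, data = d, subset = phase == "baseline2"))["day"]   # slope, second baseline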

[Graph: daily times (seconds per 50 problems) across the first baseline, meditation, and second baseline periods]

Of the other eleven subjects, six stuck with the experiment to the end. Their data was messier, because they were new to the app and there’s a big practice effect. Because of this, I was less focused on finding a drop from the first control period to the meditation period (which you’d expect anyway from practice) and more focused on looking for an increase in times during the second control period (which you wouldn’t expect to see unless the meditation had been helping).

Taking that into account, three of the six subjects seemed to me to display a similar positive effect to mine. Two I’d call inconclusive, and one showed a clear negative effect. (Here is the data for these other subjects.)

What I Learned

I found these results encouraging. Like Seth, I take this kind of basic math exercise to be a good proxy for general brain function. Anything that makes me better at it is likely to also improve my performance on other mental tasks. As I mentioned above, I’ve been using this particular app for years, and my times plateaued long ago, so finding a new factor that produces a noticeable difference is striking.

An obvious concern is that I was trying harder on the days that I meditated. Since it’s impossible to “blind” subjects as to whether they’ve meditated or not, I can’t think of a perfect way to correct for this.

If meditation does make me faster at math, what are the mechanisms? For example, does it improve my speed at processing arithmetic problems, my speed of recall on the ones I know from memory (e.g., times tables), or my decisiveness once I think I have an answer? It felt like the biggest factor was better focus. I wasn’t solving the problems faster so much as cutting down on the fractional seconds of distraction between them.

Improvements

It would have helped to have a longer first control period, as Seth and others advised me before I began. I was scheduled to present my results at this conference and at the time it was only a month away, so I decided to make the best of the time I had. Next time I’ll have a three- or four-week baseline period, especially if I’m including subjects who haven’t meditated before.

The single biggest improvement would be to recruit non-meditators to follow the same protocol. Most of the other volunteers, like me, were interested because they were already positively disposed towards meditation as a daily habit. I don’t think they liked the idea of baseline periods when they couldn’t meditate, and this probably contributed to the dropout rate. (If I’d tried to put any of them in a baseline group that never meditated at all and just did math, I doubt any of that group would have finished.) It might be easier to recruit people who already use this app (or other math games) and get them to meditate than vice versa. That would also reduce the practice effect problem, and the effects of meditation might be stronger in people who are doing it for the first time.

More difficult math problems might be a more sensitive measure, since I wouldn’t be answering them from memory. Nothing super-complex, just two- or three-digit numbers (253 + 178).

I’m planning to repeat this experiment myself at some point, and I’m also interested in aggregating data from others who do something similar, either in sync with me as above, or on your own timeline and protocol. I’d also appreciate suggestions for how to improve the experimental design.

Comment by Seth

The easiest way to improve this experiment would be to have longer phases. Usually you should run a phase until your measure stops changing and you have collected plenty of data during a steady state. (What counts as “plenty of data” depends on the strength of the treatment you are studying; it might be 5 points or 20 points.) If it isn’t clear how long it will take to reach steady state, deciding the length of a phase in advance is not a good idea.

Another way to improve this experiment would be to do statistical tests that generate p values; this would give a better indication of the strength of the evidence. Because this experiment didn’t reach steady states, the best tests are complicated (e.g., comparison of slopes of fitted lines). With steady-state data, these tests are simple (e.g., comparison of means).
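
As a rough sketch (with placeholder column names, not Peter’s actual spreadsheet), the two kinds of test look like this in R:

    # With steady-state data: compare mean times between two phases.
    # 'd' is a data frame with placeholder columns seconds, day, phase.
    d2 <- droplevels(subset(d, phase %in% c("baseline1", "meditation")))
    t.test(seconds ~ phase, data = d2)

    # Without steady states: test whether the slope of time vs. day differs
    # between phases (the day:phase interaction in a linear model).
    summary(lm(seconds ~ day * phase, data = d))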

If you are sophisticated at statistics, you could look for a time-of-day effect (are tests later in the day faster?), a day-of-week effect, and so on. If these effects exist, their removal would make the experiment more sensitive. In my brain-function experiments, I use a small number of problems so that I can adjust for problem difficulty. That isn’t possible here.
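
Removing such effects amounts to adding them to the model before estimating the phase effect; for example (again with placeholder column names):

    # Adjust for time of day and day of week before estimating the phase effect.
    # 'hour' and 'weekday' are placeholder columns for when each test was taken.
    summary(lm(seconds ~ phase + hour + weekday, data = d))
    # The phase coefficients are now estimated with those nuisance effects removed.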

These comments should not get in the way of noticing that the experiment answered the question Peter wanted to answer. I would follow up these results by studying similar treatments (listening to music for 10 minutes, sitting quietly for 10 minutes, and so on) to learn more about why meditation has an effect. The better you understand that, the better you can use it (make the effect larger, more convenient, and so on).


Brain Tracking: Early Experience

Brain tracking — frequent measurement of how well your brain is working — will become common, I believe, because brain function is important and because the brain is more sensitive to the environment (especially food) than the rest of the body. You will find it easier to decide what to eat if you measure your brain than if you measure other parts of your body. For example, I have used it to decide how much flaxseed and butter to eat. I have used R and the methodological wisdom of cognitive psychologists to make brain tracking tests. Alex Chernavsky, who lives in upstate New York, recently tried the most recent version:

In August, Seth solicited readers to help him test a new brain-tracking program. I said I was interested. I had a number of reasons for volunteering:

  • My job involves working a lot with computers, so I thought I had a decent shot at ferreting out any bugs or usability issues.
  • I have been tracking my weight daily for over eleven years, so I was confident that I would have enough motivation to do the test on a regular basis.
  • I have a long-standing interest in neuroscience, so I was eager to help advance the field, even if in a very small way.
  • I’m in my late 40s, and I’ve noticed a distinct increase in my forgetfulness. There are probably other, less-noticeable decreases in my cognitive function. Thus I have an interest in finding ways to boost the performance of my brain. Hacking brain function is obviously much easier if you can assay it via a quick, reliable proxy (i.e., reaction time).

The program itself was relatively easy to set up. The code is written in a free, open-source scripting language called R, so you have to install R on your Windows computer in order to run the program. Upon downloading the script (which is contained within an R workspace), you have to edit a function to specify the Windows folder that contains the workspace file. After that, you’re ready to go.
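
I won’t reproduce Seth’s actual code, but the edit is conceptually just telling R where the workspace lives, something like this (the folder and file names here are placeholders, not what the script really contains):

    # Placeholder example only -- the real script's function and names differ.
    work.dir <- "C:/Users/alex/brain-tracking"   # hypothetical folder
    setwd(work.dir)                              # point R at that folder
    load("brain-tracking.RData")                 # hypothetical workspace file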

The three-month pilot study did not involve testing any hypotheses with regard to the effectiveness of interventions (for example, measuring reaction times before and after flaxseed oil). My task was simply to perform the test once or twice a day.

Taking the test involves hitting a number key (2 through 8, inclusive) to match a random target number that is displayed on the screen. The program measures the latency of your response. If you hit the wrong key, the program forces you to repeat the same trial until you press the right key. Reaction times from these “correction trials” are not used in any subsequent data analysis. A session consists of 32 individual trials and takes about four minutes to complete.
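
Seth’s script is more elaborate than this, but the core of a session as just described could be sketched in R roughly as follows (a sketch, not his actual code):

    # Rough sketch of a session like the one described -- not Seth's actual code.
    run.session <- function(n.trials = 32) {
      times <- numeric(n.trials)
      for (i in 1:n.trials) {
        target <- sample(2:8, 1)                  # random target digit
        repeat {
          cat("Target:", target, "\n")
          t0  <- Sys.time()
          key <- readline("Press the matching key: ")
          latency <- as.numeric(difftime(Sys.time(), t0, units = "secs"))
          if (key == as.character(target)) break  # correct: keep this latency
          # wrong key: repeat the trial; this latency is discarded
        }
        times[i] <- latency * 1000                # record milliseconds
      }
      times
    }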

I performed the test daily for three months, although I did miss two days. The test stopped short of being fun, but it was certainly not onerous. The biggest hassle was having to wait for my laptop to boot into Windows. If I had to do the pilot study over again, I would install R on both my home and my work desktop computers, so I could perform the test more easily (perhaps as a way to take a short break from whatever other task I happened to be working on).

The original plan was for me to email the R workspace to Seth once a week or so. However, I suggested to Seth that we could improve efficiency by using a shared Dropbox folder. He agreed, and that is the method we adopted. Using this system, Seth had ongoing access to the latest data, and he could also easily make any bug fixes or other edits that would take effect the next time I ran the script.

I did identify one bug in the script. After each trial, the script briefly displays some feedback: your reaction time (in milliseconds) for that trial, your cumulative average for that session, and a percentile figure that compares your latest speed with past trials for that same target key. I noticed that the percentile scores didn’t seem to make sense for some of the keys. Seth examined his code and agreed that this was indeed a bug. He made some adjustments and the bug was fixed.
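
For the curious, a percentile figure of that kind is conceptually just a comparison of the latest latency with the stored history for that key, along these lines (placeholder names, not the actual code):

    # Sketch of a per-key percentile: the share of past trials on this target key
    # that were slower than the newest one (placeholder names, not Seth's code).
    percentile.for.key <- function(latest.ms, past.ms) {
      if (length(past.ms) == 0) return(NA)
      100 * mean(past.ms > latest.ms)   # higher = faster than more of your history
    }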

I found that over time, as expected, my scores improved substantially. They seemed to plateau after six weeks. However, my accuracy suffered. During the third month of the pilot study, I made a conscious effort to reduce my error rate. I had some success, but I also found myself frustrated by my inability to reduce the errors as much as I would have liked. Making errors, despite my best efforts, was the only vexing part of taking the test.