Posit Science: Does It Help?

Tim Lundeen pointed me to the website of Posit Science, which sells ($10/month) access to a bunch of exercises that supposedly improve various brain functions, such as memory, attention, and navigation. I first encountered Posit Science at a booth at a convention for psychologists about five years ago. They had reprints available. I looked at a study published in the Proceedings of the National Academy of Sciences. I was surprised at how weak the evidence was that their exercises helped.

Maybe the evidence has improved. Under the heading “world class science,” the Posit Science website emphasizes a few of the 20-odd published studies. First on their list of “peer-reviewed research” is “the IMPACT study”, which has its own web page.

With 524 participants, the IMPACT study is the largest clinical trial ever to examine whether a specially designed, widely available cognitive training program significantly improves cognitive abilities in adults. Led by distinguished scientists from Mayo Clinic and the University of Southern California, the IMPACT study proves that people can make statistically significant gains in memory and processing speed if they do the right kind of scientifically designed cognitive exercises.

The study compared a few hundred people who got the Posit Science exercises with a few hundred people who got an “active control” treatment that is poorly described. It is called “computer-based learning”. I couldn’t care less that people who spend an enormous amount of time doing laboratory brain tests (1 hour/day, 5 days/week, 8-10 weeks) thereby do better on other laboratory brain tests. I wanted to know if the laboratory training produced improvement in everyday life. This is what most people want to know, I’m sure. The study designers seem to agree. The procedure description says “to be of real value to users, improvement on a training program must generalize to improvement on real-world activities”.

On the all-important question of real-world improvement, the results page said very little. I looked for the published paper. I couldn’t find it on the website. Odd. I found it on Scribd.

The effect of the training on real-world activities was measured like this:

The CSRQ-25 consists of 25 statements about cognition and mood in everyday life over the past 2 weeks, answered using a 5-point Likert scale.

Mood? Why was that included? In any case, the training group started with an average score of 2.23 on the CSRQ-25. After training, they improved by 0.07 (significantly more than the control group). Not only is that a tiny improvement, percentage-wise, it is unclear what it means. The measurement scale is not well described. Was the range of possible answers 1 to 5? Or 0 to 4? What does 2 mean? What does 3 mean? It is clear, however, that on a scale where the greatest possible improvement was either 1.23 (assuming 1 was the best possible score) or 2.23 (assuming 0 was the best possible score), the actual improvement was 0.07. Not much for 50-odd hours of practice.

Although the website seems proud of the large sample size (“largest clinical trial ever”), it is now clear why it was so large: with a smaller sample, the tiny real-world improvement would have been undetectable. Because the website treats this as the best evidence, I assume the other evidence is even less impressive. The questions about mood are irrelevant to the website’s claims, which are all about cognition. Why weren’t the mood questions removed from the analysis? It is entirely possible that, had the mood questions been removed, the training would have produced no improvement.
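Here is a rough back-of-the-envelope sketch of those two points, in Python. The baseline of 2.23 and the 0.07 gain come from the paper; the standard deviation used in the sample-size estimate is not something I have from the paper, so it is an assumed value for illustration only.

```python
from math import ceil

baseline = 2.23   # average CSRQ-25 score before training (from the paper)
gain = 0.07       # average improvement after training (from the paper)

# How big the gain is relative to the most it could possibly have been,
# under the two readings of the scale discussed above.
for best_score, label in [(1.0, "answers 1-5, 1 best"), (0.0, "answers 0-4, 0 best")]:
    max_possible_gain = baseline - best_score
    print(f"{label}: gain is {gain / max_possible_gain:.1%} of the maximum possible")

# Why the trial had to be large: rough sample size per group needed to detect
# a 0.07 difference between groups (two-sample t-test, 80% power, alpha = 0.05),
# using the standard approximation n ~= 2 * (1.96 + 0.84)^2 * (sd / diff)^2.
assumed_sd = 0.3  # NOT from the paper -- an assumed standard deviation, for illustration
n_per_group = ceil(2 * (1.96 + 0.84) ** 2 * (assumed_sd / gain) ** 2)
print(f"assuming SD = {assumed_sd}: roughly {n_per_group} people per group")
```

With those inputs, the gain works out to about 3-6% of the largest improvement the scale allows, and detecting a difference that small plausibly requires a few hundred people per group, which is roughly what the 524-person study had.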

The first author of the IMPACT study is Glenn Smith, who works at the Mayo Clinic. I emailed him to ask (a) why the assessment of real-world effects included questions about mood and (b) what happens if the mood questions are removed. I predict he won’t answer. A friend predicts he will.

More questions for Posit Science

Assorted Links

Thanks to Dave Lull.

Movie Grosses and Nobel Prizes

In his new piece Gross Misunderstanding, in the Columbia Journalism Review, Edward Jay Epstein writes:

By focusing on the box-office race that is spoon-fed to them each week, journalists may entertain their audiences, but they are missing the real story.

Something similar happens with the Nobel Prizes. Journalists print what they are told — Scientists X and Y did beautiful “pure science” about this or that — and thereby miss the real story. In the case of Nobel Prizes in Medicine, the real story is the long-running lack of progress on major diseases (cancer, heart disease, depression, etc.).

Vitamin D3 in Morning Improves Sleep After All (Story 26)

Adam Clemans (28 years old, about 80 kg, pharmacist, lives in Shanghai) commented on a recent post that Vitamin D3 didn’t seem to improve his sleep (“I can’t say I noticed any improvement in my sleep from Vitamin D”). He took 4000 IU in drop form right after he woke up.

I wrote him for details. I said that since 4000 IU was the lowest dose I found effective, he might want to try a higher dose. Adam answered my questions and said he would try a higher dose. Two weeks later he wrote again:

I started taking 4 drops (8000 IU) of Vitamin D3 1st thing in the morning (up from 2 drops or 4000 IU); my sleep seemed to improve immediately and quite dramatically. I had been struggling with middle-of-the-night awakening for a week or so, but after the change I slept like a brick or a baby (pick your metaphor). I would like to experiment with this more before I say I am sold on it, but for now it seems to be working well.

He’d been doing the higher dose for two weeks. Hard to explain as a placebo effect.