Tim Lundeen pointed me to the website of Posit Science, which sells ($10/month) access to a bunch of exercises that supposedly improve various brain functions, such as memory, attention, and navigation. I first encountered Posit Science at a booth at a convention for psychologists about five years ago. They had reprints available. I looked at a study published in the Proceedings of the National Academy of Sciences. I was surprised at how weak the evidence was that their exercises helped.
Maybe the evidence has improved. Under the heading “world class science” the Posit Science website emphasizes a few of the 20-odd published studies. First on their list of “peer-reviewed research” is “the IMPACT study”, which has its own web page.
With 524 participants, the IMPACT study is the largest clinical trial ever to examine whether a specially designed, widely available cognitive training program significantly improves cognitive abilities in adults. Led by distinguished scientists from Mayo Clinic and the University of Southern California, the IMPACT study proves that people can make statistically significant gains in memory and processing speed if they do the right kind of scientifically designed cognitive exercises.
The study compared a few hundred people who got the Posit Science exercises with a few hundred people who got an “active control” treatment that is poorly described. It is called “computer-based learning”. I couldn’t care less that people who spend an enormous amount of time doing laboratory brain tests (1 hour/day, 5 days/week, 8-10 weeks) thereby do better on other laboratory brain tests. I wanted to know if the laboratory training produced improvement in everyday life. This is what most people want to know, I’m sure. The study designers seem to agree. The procedure description says “to be of real value to users, improvement on a training program must generalize to improvement on real-world activities”.
On the all-important question of real-world improvement, the results page said very little. I looked for the published paper. I couldn’t find it on the website. Odd. I found it on Scribd.
Effect of the training on real-world activities was measured like this:
The CSRQ-25 consists of 25 statements about cognition and mood in everyday life over the past 2 weeks, answered using a 5-point Likert scale.
Mood? Why was that included? In any case, the training group started with an average score of 2.23 on the CSRQ-25. After training, they improved by 0.07 (significantly more than the control group). Not only is that a tiny improvement percentage-wise, it is unclear what it means. The measurement scale is not well described. Was the range of possible answers 1 to 5? Or 0 to 4? What does 2 mean? What does 3 mean? It is clear, however, that on a scale where the greatest possible improvement was either 1.23 (assuming 1 was the best possible score) or 2.23 (assuming 0 was the best possible score), the actual improvement was 0.07. That is at most about 6% (0.07/1.23) of the possible improvement, and perhaps only about 3% (0.07/2.23). Not much for 50-odd hours of practice. Although the website seems proud of the large sample size (“largest clinical trial ever”), it is now clear why it was so large: with a smaller sample the tiny real-world improvement would have been undetectable. Because the website treats this as the best evidence, I assume the other evidence is even less impressive. The questions about mood are irrelevant to the website’s claims, which are all about cognition. Why weren’t the mood questions removed from the analysis? It is entirely possible that, had the mood questions been removed, the training would have produced no improvement.
The first author of the IMPACT study is Glenn Smith, who works at the Mayo Clinic. I emailed him to ask (a) why the assessment of real-world effects included questions about mood and (b) what happens if the mood questions are removed. I predict he won’t answer. A friend predicts he will.