Owen AM, Hampshire A, Grahn JA, Stenton R, Dajani S, Burns AS, Howard RJ, Ballard CG. Putting brain training to the test. Nature. 2010 Jun 10;465(7299):775-8. Abstract

Comments on News and Primary Papers
Comment by: Robert Bilder, SRF Advisor
Submitted 27 April 2010
Posted 27 April 2010

It’s wonderful to see this study in Nature, for it draws international attention to extremely important issues, including the degree to which cognitive training may yield generalizable effects, and to the amazing potential power of Web-based technologies to engage tens of thousands of individuals in behavioral research. It seems likely—and unfortunate—that for much of the world, the “take-home” message will be that all this “brain training” is bunk.

For me, the most exciting aspect of the study is that it was done at all. The basic design (engaging a TV audience to register for online experiments) is ingenious and indicates the awesome potential to use media for “good instead of evil.” Are there any investigators out there who would not be happy to recruit 52,617 research participants (presumably within the course of a single TV season)? Of course, this approach yielded only 11,430 people who completed the protocol (still sounds pretty good to me, especially since this reflects 279,692 completed sessions). For those of us who struggle for years to obtain behavioral test data on several thousand research participants in our laboratories, this is a dream come true. Thus, I see the big contribution here as a validation of high-throughput behavioral phenotyping using the Internet, which will be necessary if we are to realize the promise of neuropsychiatric genetics.

The success of this validation experiment is evident from the specific training effects (not the generalization effect), which showed effect sizes (Cohen’s d) ranging from 0.72 to 1.63. It would be of high value to see considerably more data on the within-subject variability of each training test score and on the covariance of different scores across sessions, not to mention other quality metrics, such as response time consistency, which should be used to assure “on-task” behavior and the validity of each session. Despite the lack of these details, the sheer size of the positive effects means the results must be reasonably reliable (i.e., it is not feasible to get large and consistent effects from noise). This is very encouraging for those who want to see more Web-based behavioral phenotyping.
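For readers who want the arithmetic made concrete, here is a minimal sketch of how a within-subject effect size in this range can be computed (illustrative only; the simulated data and variable names are my assumptions, not the study's actual scoring code):

    import numpy as np

    def cohens_d_paired(pre, post):
        """Within-subject Cohen's d (often called d_z): mean improvement
        divided by the standard deviation of the improvement scores."""
        diff = np.asarray(post) - np.asarray(pre)
        return diff.mean() / diff.std(ddof=1)

    # Illustrative data: 1,000 simulated participants whose trained-task
    # scores improve by about one standard deviation of the change score.
    rng = np.random.default_rng(0)
    pre = rng.normal(100, 15, size=1000)
    post = pre + rng.normal(15, 15, size=1000)
    print(f"d = {cohens_d_paired(pre, post):.2f}")  # prints a value near 1.0

Note that an effect of this size is easy to detect but says nothing about generalization; it only confirms that people got better at the tasks they practiced.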

The “negative” results center on the lack of generalization to the “benchmark” tests, and this aspect of the outcome involves many more devils in the details. The authors are sensitive to many of the possibilities. The argument that there might be a misalignment of the training with the benchmark tests is difficult to refute. The authors suggest that their 12 training tasks covered a “broad range of cognitive functions” and “are known to correlate highly with measures of general fluid intelligence or ‘g,’ and were therefore most likely to produce an improvement in the general level of cognitive functioning” [italics added]. But this assertion is not logically sound. It is not unreasonable, but there is no necessary reason to suppose that the best way to improve general cognitive ability is to practice tasks that merely correlate with it.

The suitability of the CANTAB (Cambridge Neuropsychological Test Automated Battery) tasks as benchmarks is also not an open-and-shut case. Yes, they are sensitive to brain damage and to the disruptive effects of various agents (for a comprehensive bibliography, see www.cantab.com/science/bibliography.asp). The data showing that these tasks can detect improvement in healthy people are more limited. One wonders whether the guanfacine and clonidine improvement effects observed in the one study that is cited (Jäkälä et al., 1999a) would be seen in the sample of people who participated in the new study. Incidentally, the same authors also reported impairments on other cognitive tasks with the same agents (Jäkälä et al., 1999b; Jäkälä et al., 1999c). The bottom line is that it may be difficult to see big improvements, particularly in general cognitive ability, in people who are to some extent preselected for their healthy cognitive function and interest in “exercising” their brains.

Overall, I believe more research is needed to determine which aspects of cognitive function may be most trainable, in whom, and under what circumstances. I worry that this publication may end up derailing important studies that can shed light on these issues, since it is much easier to conclude that something is “bunk” than to push the envelope and systematically study all the reasons such a study could generate negative generalization results. Let us hope the baby is not thrown out with the bathwater at this early stage of investigation.

References:

Jäkälä P, Sirviö J, Riekkinen M, Koivisto E, Kejonen K, Vanhanen M, Riekkinen P Jr. Guanfacine and clonidine, alpha 2-agonists, improve paired associates learning, but not delayed matching to sample, in humans. Neuropsychopharmacology. 1999a Feb;20(2):119-30. Abstract

Jäkälä P, Riekkinen M, Sirviö J, Koivisto E, Kejonen K, Vanhanen M, Riekkinen P Jr. Guanfacine, but not clonidine, improves planning and working memory performance in humans. Neuropsychopharmacology. 1999b May;20(5):460-70. Abstract

Jäkälä P, Riekkinen M, Sirviö J, Koivisto E, Riekkinen P Jr. Clonidine, but not guanfacine, impairs choice reaction time performance in young healthy volunteers. Neuropsychopharmacology. 1999c Oct;21(4):495-502. Abstract

Comment by: Philip Harvey
Submitted 27 April 2010
Posted 27 April 2010

The paper from Owen et al. reports that a sample of community dwellers recruited to participate in a cognitive remediation study did not improve their cognitive performance except on the tasks on which they trained. While the results of cognitive remediation studies in schizophrenia have been inconsistent, the results of this study are particularly difficult to interpret, for several reasons:

1. Baseline performance on the "benchmarking" assessment does not appear to have been adjusted for age, education, and other demographic predictors. As a result, we do not know whether the participants even had room to improve from baseline; it is possible that the volunteers in this study were very high performers at baseline who simply could not improve (see the norming sketch after this list). Furthermore, if they are, in fact, high performers, their performance and the lack of any improvement with treatment may be irrelevant to poor performers.

2. There is no way to know if the research participants who completed the baseline and endpoint assessments were the same ones who completed the training. Without this control, which is provided in studies that directly observe subjects, there may be reason to be suspicious of the results.

3. Although this is a letter to the editor, methodological details are missing. Recent studies that reported successful cognitive remediation interventions have used dynamic titration to adjust task difficulty according to performance at the time of the training. While Owen et al. do not say, it seems unlikely that their study used dynamic titration. The use of this technique is the major difference between older, unsuccessful cognitive remediation interventions and recent, more successful ones delivered to people with schizophrenia (see McGurk et al., 2007 for a discussion).

4. Even more important is the small effect of training, and of participation in general, on the benchmarking assessment. The changes on half of the tests administered are smaller than those reported for simple retesting, without any cognitive training, in people with schizophrenia (see Goldberg et al., 2007). Although the authors argue that these tests are known to be sensitive, the very small effect is particularly salient for paired associates learning. Thus, the question of whether some of these tests are insensitive to change arising from either treatment or practice requires consideration, particularly since we do not know how well the participants performed at baseline.

5. Most important, these data are likely to reflect substantial demand characteristics. A study put on by a show called Bang Goes the Theory certainly appears to pull for disconfirmation. It is possible that demand characteristics account for more variance than training, since even successful training effects can be small. We know that environmental factors such as disability compensation account for more variance in real-world outcomes in schizophrenia than ability does (Rosenheck et al., 2006); it would be no surprise if demand characteristics accounted for more variance than ability as well.
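Regarding point 1, here is a minimal sketch of the kind of regression-based demographic adjustment that the point calls for (the normative model, names, and data below are illustrative assumptions, not anything computed by Owen et al.):

    import numpy as np

    # Illustrative normative sample: raw test scores modeled as a linear
    # function of age and years of education, plus noise.
    rng = np.random.default_rng(1)
    age = rng.uniform(18, 80, size=500)
    education = rng.uniform(8, 20, size=500)
    raw = 120 - 0.4 * age + 1.5 * education + rng.normal(0, 10, size=500)

    # Ordinary least squares fit of the demographic model.
    X = np.column_stack([np.ones_like(age), age, education])
    beta, *_ = np.linalg.lstsq(X, raw, rcond=None)

    # A participant's demographically adjusted score is the standardized
    # residual: how far they sit from the score predicted for someone of
    # their age and education.
    predicted = X @ beta
    residual = raw - predicted
    z_adjusted = residual / residual.std(ddof=3)

With baseline scores expressed this way, a sample of unusually high performers would be immediately visible as a distribution of adjusted scores shifted well above zero, and any ceiling problem could be assessed directly.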

Thus, while these results support the reasonable conclusion that participating in cognitive remediation over the Internet does not guarantee improved cognitive performance, the current research design does not address many important issues regarding cognitive enhancement in clinical populations.

References:

Goldberg TE, Goldman RS, Burdick KE, Malhotra AK, Lencz T, Patel RC, Woerner MG, Schooler NR, Kane JM, Robinson DG. Cognitive improvement after treatment with second-generation antipsychotic medications in first-episode schizophrenia: Is it a practice effect? Arch Gen Psychiatry. 2007 Oct;64:1115-22. Abstract

McGurk SR, Twamley EW, Sitzer DI, McHugo GJ, Mueser KT. A meta-analysis of cognitive remediation in schizophrenia. Am J Psychiatry. 2007;164:1791-1802. Abstract

Rosenheck R, Leslie D, Keefe R, McEvoy J, Swartz M, Perkins D, Stroup S, Hsiao JK, Lieberman J, CATIE Study Investigators Group. Barriers to employment for people with schizophrenia. Am J Psychiatry. 2006;163:411-7. Abstract

Comment by: Terry Goldberg
Submitted 7 May 2010
Posted 7 May 2010

This important paper by Owen and colleagues reads like a cautionary tale. In a Web-based study of over 11,000 presumptively healthy individuals, neither of two different types of cognitive training resulted in transfer of improvement to a reasoning task or to several well-validated cognitive tasks from the Cambridge Neuropsychological Test Automated Battery (CANTAB). I would like to point out three issues with the study.

First, the amount of training that individuals chose to complete differed greatly. While the authors found no correlation between the number of training sessions and performance improvement (or the lack thereof), it is nevertheless possible that there is some critical threshold, either in the number of sessions or, more importantly, in time spent in sessions (not noted in the paper), that must be reached before transfer can occur. In other words, the relationship between training and transfer may be nonlinear, perhaps sigmoidal.
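To make the shape of that hypothesis concrete, here is a purely illustrative logistic form for the training-transfer relationship (every parameter is an assumption chosen only to display the threshold behavior, not an estimate from any data):

    import numpy as np

    def transfer_gain(sessions, max_gain=1.0, threshold=20.0, slope=0.3):
        """Hypothetical sigmoidal (logistic) dose-response: transfer is
        negligible below the threshold number of sessions, rises steeply
        around it, and then saturates."""
        return max_gain / (1.0 + np.exp(-slope * (sessions - threshold)))

    for n in (5, 15, 20, 25, 40):
        print(f"{n:2d} sessions -> transfer gain {transfer_gain(n):.2f}")

Under a curve like this, a sample in which most participants stop short of the threshold would show essentially no transfer, and a simple linear correlation between session count and improvement could easily miss the effect.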

Second, it is possible that scores on some of the benchmark/transfer tasks were close to ceiling in this normal population, preventing gain. Perhaps more likely, they could have been close to floor (see Figure 1 in the paper; scores were seemingly quite low), making them insensitive to gain.

Last, as pointed out by Phil Harvey, the nature of the recruitment tool, a debunking TV show called Bang Goes the Theory, may have introduced a bias toward disconfirmation among the participants. This would be especially pertinent if participants understood the design of the study, which seems likely.

Comment by: Angus MacDonald, SRF Advisor
Submitted 11 May 2010
Posted 11 May 2010

Owen and colleagues are to be commended for drawing attention to the great constraint of cognitive training—that is, the potential for improvements on only the restricted set of abilities that were trained.

This has been the bugbear of cognitive training for a long time. Short story with a purpose: In 2001, when I raved about the remarkable results of Klingberg (later published as Olesen et al., 2004) to John Anderson, an esteemed cognitive psychologist at Carnegie Mellon University, he scoffed at the possibility that Klingberg's training might have led to improvements on Raven's Matrices, a measure of general intelligence. "People have been looking into this for a century. If working memory training improved intelligence, schools would be filled with memory training courses rather than math and language courses," he said (or something to that effect). This issue of training and generalization is not new, and the results of Owen and colleagues are consistent with a large body of twentieth-century research.

Owen, therefore, reminds us of an important issue in the current generation of excitement about neuroplasticity: behavioral effects are likely to be small for distal generalization. The possibility of striking results is likely going to require something well beyond what is encountered in everyday or casual experience.

One way to improve on casual experience is dynamic titration. It is reasonably well established that when faced with a task of fixed difficulty, people will begin to asymptote on accuracy and then get faster and faster, with no hope of generalization. The online methods are mum about how this concern was addressed. (One certainly hopes that it was.)
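For concreteness, here is a minimal sketch of one common titration rule, a simple accuracy-based staircase of the sort used in adaptive n-back training (the thresholds and step size are assumptions for illustration, not the procedure of Owen et al. or any particular study):

    def titrate(level, accuracy, step=1, up_at=0.85, down_at=0.60,
                min_level=1, max_level=10):
        """Adjust task difficulty after each block so performance stays
        in a demanding middle range instead of plateauing.

        level    -- current difficulty (e.g., n in an n-back task)
        accuracy -- proportion correct on the block just completed
        """
        if accuracy >= up_at:
            return min(level + step, max_level)  # too easy: make it harder
        if accuracy <= down_at:
            return max(level - step, min_level)  # too hard: back off
        return level                             # in range: hold steady

    # Example: an improving participant climbs from 1-back toward 4-back.
    level = 1
    for acc in (0.90, 0.88, 0.70, 0.92, 0.55):
        level = titrate(level, acc)
        print(f"accuracy {acc:.2f} -> next block at {level}-back")

The point of such a rule is precisely to defeat the asymptote described above: the moment accuracy plateaus, the task gets harder, so participants cannot simply get faster at a fixed level of difficulty.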

We recently demonstrated that dynamic titration on an n-back task, in the context of broader working memory training, can produce local generalization (Haut et al., 2010). In that study, we examined changes in prefrontal cortical activity following cognitive training, compared to an active placebo control, in patients with schizophrenia. We found that training produced a stimulus-general improvement on the trained task, and that this improvement mapped onto greater frontopolar and dorsolateral prefrontal cortex activity. This result was quite similar to that reported in healthy adults by Olesen and colleagues (Olesen et al., 2004). I don't think I'll press for working memory training courses in my local school yet, but Owen won't be the reason.

References:

Olesen PJ, Westerberg H, Klingberg T. Increased prefrontal and parietal activity after training of working memory. Nat Neurosci. 2004;7(1):75-9. Epub 2003 Dec 14. Abstract

Haut KM, Lim KO, MacDonald AW III. Prefrontal cortical changes following cognitive training in patients with chronic schizophrenia: effects of practice, generalization and specificity. Neuropsychopharmacology. 2010 Apr 28. Abstract
