Schizophrenia Research Forum - A Catalyst for Creative Thinking

Targeted Cognitive Training and Real-World Outcomes in Schizophrenia

3 August 2009. New research suggests that cognitive training can achieve impressive results in schizophrenia, although some antipsychotic drugs may limit its effectiveness, and societal influences may shape real-world outcomes. In two new studies, Sophia Vinogradov, of the University of California at San Francisco, and colleagues find that a neuroscience-based training program substantially improves cognitive outcomes in patients with schizophrenia, producing gains that may last six months. In a third study, Vinogradov and associates warn that anticholinergic effects of antipsychotic drugs may undermine the training. A person’s environment might, too: a team led by Philip Harvey, of the Emory University School of Medicine, finds that most subjects with schizophrenia in rural Sweden live independently, whereas most in New York City do not, despite similar ability levels.

Cognitive deficits often appear before, and independently of, other symptoms of schizophrenia (for a review of cognition in schizophrenia, see Bowie and Harvey, 2005; also see SRF related news story; SRF news story). They respond only modestly, if at all, to antipsychotic drugs (see SRF related news story), making training designed to improve their cognitive processes or to compensate for impaired functions an attractive option. According to a 2007 meta-analysis (McGurk et al., 2007), such training can moderately improve cognitive performance, but may fail to improve patients’ real-world functioning unless they also receive treatment that targets their psychosocial functioning.

Perceive it to retrieve it
Vinogradov's group thought that cognitive training might work better if it took its cue from the latest neuroscience findings (see SRF live discussion). In their paper in the July American Journal of Psychiatry, first author Melissa Fisher of the University of California at San Francisco and colleagues write, “Prior cognitive remediation approaches have not specifically targeted impaired perceptual processes, although a growing body of research has identified a number of early sensory deficits in schizophrenia and has related them to higher-order cognitive impairments” (see SRF related news story).

Fisher and colleagues designed the program to target early auditory and working memory processes, with the hope of also enhancing verbal memory and overall cognition. As they explain it, “The basic notion is that by improving the speed and accuracy of information processing in the auditory system, higher-order functions such as verbal encoding and verbal memory retrieval have more reliable signals on which to operate.”

The study examined 55 outpatients with chronic schizophrenia who had undergone randomization to either the training program or a control condition. The stand-alone cognitive training challenged subjects to make progressively harder auditory distinctions as their performance improved. Correct responses earned “rewards” of points and animations. Since studies suggest that people with schizophrenia can benefit from practice (McGurk et al., 2007), those in the training program received an hour of computer-based training daily, five days a week, until they had received 50 hours total. Control subjects played computer games for the same amount of time.

The researchers chose most of their cognitive measures based on the recommendations of the MATRICS (Measurement and Treatment Research to Improve Cognition in Schizophrenia) initiative (for information specifically about the cognitive battery, see Nuechterlein et al., 2008). Even though both subject groups began the study with similar cognitive performance, over time the training group improved more than controls in verbal working memory, verbal learning, verbal memory, and global cognition. Training-related effect sizes topped 0.85 for global cognition, verbal learning, and verbal memory, surpassing those found in the earlier meta-analysis (McGurk et al., 2007).
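For readers unfamiliar with the effect sizes quoted here and in the comments below, these are Cohen’s d values: the difference between group means divided by the pooled standard deviation, so a d of 0.85 means the groups differ by 0.85 standard deviations. A minimal sketch of the calculation (the function name and the example numbers are illustrative only, not taken from the study):

```python
import math

def cohens_d(mean_treatment, mean_control, sd_treatment, sd_control,
             n_treatment, n_control):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_var = (((n_treatment - 1) * sd_treatment ** 2 +
                   (n_control - 1) * sd_control ** 2) /
                  (n_treatment + n_control - 2))
    return (mean_treatment - mean_control) / math.sqrt(pooled_var)

# Hypothetical numbers for illustration: with equal SDs of 1.0,
# a raw difference of 0.85 points yields d = 0.85.
print(round(cohens_d(10.85, 10.0, 1.0, 1.0, 28, 27), 2))
```

By convention, d values around 0.2 are considered small, 0.5 medium, and 0.8 or above large, which is why the gains reported here stand out against the moderate effects in the earlier meta-analysis.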

Fisher and colleagues offer an explanation for the program’s apparent success: “As the auditory cortex responds to the psychophysical training, a more salient verbal signal is ‘fed forward’ into working memory operations; this then permits more efficient and accurate encoding of the verbal information.” In a commentary in the same issue, Michael Green of the University of California, Los Angeles, commends the researchers for basing the intervention on neuroplasticity models and for using MATRICS-recommended measures of cognition, which should facilitate comparing their results with those from other studies (see SRF related news story). Despite these strengths, he wondered whether the program would help patients function in the real world.

The nitty-gritty
Along with the training’s effects on functional outcomes, other questions hung in the air: How long would the benefits last? Would further training help patients more? In a follow-up study published online by Schizophrenia Bulletin on March 5, first author Fisher, Vinogradov, and colleagues addressed these questions.

The study focused on 32 clinically stable patients with schizophrenia who had been randomly assigned to either the cognitive training or control conditions described above. After they finished their 50 sessions of auditory training, 10 of the 22 subjects in the training group received 50 more hours of training. This added training aimed to improve visual processing and cognitive control.

From baseline to post-training to the six-month follow-up, the training group's performance improved more than that of control subjects on verbal learning and memory and on cognitive control. Verbal memory did not improve in this study, contrary to the earlier one. All domains except verbal memory showed a large, positive effect of training from baseline to the post-training assessment, and a medium-to-large effect from baseline to six months later.

When they looked at the dose-response relationship, Fisher and colleagues noted that only the group that completed 100 hours of training improved more than controls on global cognition and processing speed at six months. “Thus, it appears that a longer training period, or additional training of visual and cognitive control processes, may be required to drive improvements in speed of processing,” they write.

Treatment tradeoffs
Unfortunately, patients may not reap the full benefits of cognitive training if they are taking antipsychotic medications that cause anticholinergic effects. Acetylcholine dysfunction, thought to loom large in Alzheimer’s disease, may also contribute to schizophrenia-related cognitive deficits (Gray and Roth, 2007). In the July 1 American Journal of Psychiatry in Advance, Vinogradov and colleagues describe the effects of anticholinergic activity on training outcome in 49 outpatients with schizophrenia.

At baseline, higher serum anticholinergic activity correlated with worse performance on tests of verbal working memory, and verbal learning and memory, but not on other MATRICS-based cognitive measures. After 50 sessions of auditory training, anticholinergic activity correlated with lesser gains in global cognition. In fact, serum anticholinergic activity explained 20 percent of the variance in global cognition change, more than age, intelligence, or symptom severity.

“It appears that if we wish to maximize patients’ response to rehabilitation, we must take care to minimize their anticholinergic burden,” the researchers conclude. They suggest that clinicians weigh the anticholinergic effects of drugs such as clozapine, olanzapine, quetiapine, benztropine, and some first-generation antipsychotics before prescribing them.

Worldly influences
Improving patients’ lives might require more than skills training; after all, what people can do and what they actually do often differ. In a 1997 study of older patients with schizophrenia in New York and London, Harvey and colleagues found evidence that environmental factors lead to divergent outcomes in subjects with similar levels of impairment (Harvey et al., 1997).

In the July American Journal of Psychiatry, another Harvey-led team presents findings of a recent cross-national study. It examined functional capacity—the ability to perform daily living activities—and real-life outcomes in outpatients with schizophrenia or schizoaffective disorder who lived in vastly different places. It compared 244 subjects in New York City or its close suburbs with 146 subjects in a mostly rural part of Sweden called Trollhättan.

To measure functional capacity, the study used two subscales of the University of California, San Diego, Performance-Based Skills Assessment (UPSA) battery. These UPSA-B subscales, which parallel UPSA total scores, assess the ability to manage money and communicate. Information about real-world outcomes came from subjects themselves, their case managers, and their charts. It included whether subjects had reached milestones such as living independently and being at least partly financially responsible, working for pay, and being married or widowed.

Results showed that the two groups resembled each other in functional capacity. Even so, and despite similar vocational and social outcomes, their residential outcomes diverged drastically. In particular, 80 percent of the Swedish patients lived independently and at least contributed to their own financial support, versus only 46 percent of the New Yorkers. The New York patients were much likelier than their counterparts in rural Sweden to live in restricted settings. In the Swedish group, UPSA-B scores did not differentiate those living independently, in a restricted setting, or in a non-restricted setting without financial responsibility, but they did in the United States.

As to the specific environmental factors that might cause different life stories to unfold in patients with similar abilities, Harvey and colleagues point to differences in social service systems in the two places. In Trollhättan, disabled persons with schizophrenia receive more generous financial aid from the government than do those living in the costly New York area. Harvey and colleagues write, “These differences in social support for people with mental illness have a clear and strong signal in terms of real-world functional outcomes.” Such environmental differences might even determine the real-life effects of cognitive training.—Victoria L. Wilcox.

References:
Fisher M, Holland C, Merzenich MM, Vinogradov S. Using neuroplasticity-based auditory training to improve verbal memory in schizophrenia. Am J Psychiatry. 2009 July;166(7):805-811. Abstract

Green MF. New possibilities in cognition enhancement for schizophrenia. Am J Psychiatry. 2009 July;166(7):749-752. Abstract

Fisher M, Holland C, Subramaniam K, Vinogradov S. Neuroplasticity-based cognitive training in schizophrenia: an interim report on the effects 6 months later. Schizophrenia Bulletin Advance Access. 2009, March 5. Abstract

Vinogradov S, Fisher M, Warm H, Holland C, Kirshner MA, Pollock BG. The cognitive cost of anticholinergic burden: Decreased response to cognitive training in schizophrenia. Am J Psychiatry in Advance. 2009, July 1. Abstract

Harvey PD, Helldin L, Bowie CR, Heaton RK, Olsson A-K, Hjärthag F, Norlander T, Patterson TL. Performance-based measurement of functional disability in schizophrenia: A cross-national study in the United States and Sweden. Am J Psychiatry. 2009 July; 166(7):821-827. Abstract

Comments on Related News


Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Richard Keefe
Submitted 12 October 2007
Posted 12 October 2007

As stated in the CATIE and CAFÉ neurocognition manuscripts, it is possible that the small improvements in neurocognitive performance following randomization to one of the antipsychotic treatments in these studies are due solely to practice effects or expectation biases. This statement is affirmed by the excellent recent study by Goldberg et al., in which improvements in cognitive performance were almost identical in magnitude to the practice effects found in healthy controls. While these data may be disappointing to the hope that second-generation medications improve cognition, they may also suggest that cognitive performance is less recalcitrant to change than previously expected.

In the context of a double-blind study design, the degree of cognitive enhancement observed for each treatment group is a function of three major variables: treatment effect, placebo effect, and practice effect. In studies of antipsychotic medications without a placebo control group, practice and placebo effects in schizophrenia cannot be disentangled from treatment effects. They also cannot be disentangled from each other. Recent data from a double-blind study comparing the effects of donepezil hydrochloride and placebo in a highly refined sample of 226 patients with schizophrenia stabilized while taking second-generation antipsychotics suggested that patients taking placebo had neurocognitive effect size improvements (0.22 SD after being tested twice over 6 weeks; 0.45 SD after the third assessment at 12 weeks) on the same test battery used in the CATIE and CAFÉ studies, suggesting a practice or placebo effect (Keefe et al., Neuropsychopharmacology, in press) consistent with the improvements reported in the CATIE and CAFÉ treatment studies. These cognitive improvements are in contrast to test-retest data collected in patients with schizophrenia tested with the MATRICS Consensus Cognitive Battery (MCCB; Nuechterlein et al., in press) and the Brief Assessment of Cognition in Schizophrenia (BACS; Keefe et al., 2004), which showed very small practice effects. The contrast between these test-retest studies, which did not involve the initiation of new treatments, and the cognitive improvements seen after the initiation of antipsychotic treatment or placebo suggests that attribution biases beyond simple practice effects may be at work.

Test-retest data from patients tested twice within a briefer period than the test interval in the four treatment studies discussed above suggest that schizophrenia patients demonstrate relatively small improvements in executive functions (Keefe et al., 2004; Nuechterlein et al., in press) and the WAIS digit-symbol test (Nuechterlein et al., in press), and medium improvements on tests of verbal memory only when identical versions are repeated (Hawkins and Wexler, 1999; Keefe et al., 2004) but not on tests of verbal fluency (Keefe et al., 2004; Nuechterlein et al., in press). In the donepezil/placebo study, patients who received placebo improved substantially across several cognitive domains. Although not tested directly, this series of results suggests that the magnitude of placebo effects in cognitive enhancement trials may exceed the reported size of practice-related improvements in studies of schizophrenia patients tested twice without the prospect of the initiation of a cognitive intervention.

The greater improvements in cognition found in the context of a placebo-controlled trial could be due to a variety of psychological factors. When a patient enters into a trial or is treated with a medication that is believed to contribute beneficially to cognitive performance, rater bias and expectation bias can have strong effects on performance. Patients who are told that their cognitive abilities might improve may be able to perform better on the test batteries used in the study simply because their expectations become more optimistic. Second, testers who believe that a patient will have cognitive improvement, or hope for such improvement, could administer the tests in a more hopeful, positive manner, which can help the patient raise his or her expectations for performance and thus engage motivational systems that were previously disengaged (Keefe, 2006). Such expectation bias can also lead to inaccuracies in scoring; since many cognitive tests require the use of judgment to determine final scores, hopeful testers are more likely to give the “benefit of the doubt” to patients after they have entered into a study in which the treatment is potentially cognitively enhancing. Third, this same type of expectation could have an impact on the support that a patient receives in his or her community/living situation. If the people who interact regularly with the patient begin looking for better performance on cognitively related tasks, these expectations could become self-fulfilling in that they may raise the confidence and motivation of the patient to perform well on such tasks, including cognitive testing.

The factors associated with improvement during a placebo-controlled trial are indeed complex, and it is difficult to distinguish practice effects from placebo effects. However, the relatively small clinical improvement in test-retest designs without treatment or placebo intervention suggests that any potential practice effects may at least be potentiated by placebo effects.

The implications of this series of results include a methodological caution and a reason for optimism. Regarding the caution, future trials of cognitive-enhancing compounds might need to be designed in such a way that practice and placebo effects are reduced. Very few treatment studies of patients with schizophrenia have employed a priori methodological strategies to reduce the magnitude of potential practice effects, such as the use of a placebo run-in period with one or more administrations of the cognitive battery prior to randomization. Regarding the optimism, these studies suggest that schizophrenia cognition (perhaps especially when freed from the dampening effects of large doses of high-potency medications such as haloperidol) could be more plastic than had been previously assumed; it is possibly as sensitive to experience-dependent learning in schizophrenia patients as in healthy controls, and it may benefit from improved psychological expectations. While this is a methodological nuisance for clinical trial designs, it may also reveal an unexpectedly large potential gain for psychological interventions such as cognitive remediation, cognitive-behavioral therapy, and even encouragement.

References:

Goldberg TE, Goldman RS, Burdick KE, Malhotra AK, Lencz T, Patel RC, Woerner MG, Schooler NR, Kane JM, Robinson DG. Cognitive improvement after treatment with second-generation antipsychotic medications in first-episode schizophrenia: Is it a practice effect? Arch Gen Psychiatry. 2007 Oct;64:1115-1122. Abstract

Hawkins KA, Wexler BE (1999). California Verbal Learning Test practice effects in a schizophrenia sample. Schizophr Res 39: 73-78. Abstract

Keefe RSE. Missing the sweet spot: Disengagement in schizophrenia. Psychiatry, 2006; 3: 36-41.

Keefe RSE, Malhotra AK, Meltzer H, Kane JM, Buchanan RW, Murthy A, Sovel M, Li C, Goldman R. Efficacy and safety of donepezil in patients with schizophrenia or schizoaffective disorder: Significant placebo/practice effects in a 12-week, randomized, double-blind, placebo-controlled trial. Neuropsychopharmacology, 2007 [Epub ahead of print]. Abstract

Keefe RSE, Goldberg TE, Harvey PD, Gold JM, Poe M, Coughenour L. The Brief Assessment of Cognition in Schizophrenia: Reliability, sensitivity, and comparison with a standard neurocognitive battery. Schizophrenia Research, 2004; 68: 283-297. Abstract

Nuechterlein KH, Green MF, Kern RS, Baade LE, Barch D, Cohen J, Essock S, Fenton WS, Frese FJ, Gold JM, Goldberg T, Heaton R, Keefe RSE, Kraemer H, Mesholam-Gately R, Seidman LJ, Stover E, Weinberger D, Young AS, Zalcman S, Marder SR. The MATRICS consensus cognitive battery: Part 1. Test selection, reliability, and validity. The American Journal of Psychiatry (in press).

View all comments by Richard Keefe

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Narsimha Pinninti (Disclosure)
Submitted 15 October 2007
Posted 15 October 2007
  I recommend the Primary Papers

This article questions the prevailing notion that antipsychotic medications (particularly second-generation antipsychotics) improve cognitive functioning in individuals with schizophrenia. As the authors rightly note, practice effects should be taken into account before attributing improvements to drug effects.

View all comments by Narsimha Pinninti

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Saurabh Gupta
Submitted 15 October 2007
Posted 15 October 2007
  I recommend the Primary Papers

I propose that future studies use computerized cognitive assessment tools such as CANTAB or CogTest, which have at least two advantages. These tools offer multiple parallel test forms, so at each testing session within a study, participants receive a similar but not identical test of the same cognitive function. Computerized assessment also reduces the chance of subjective bias on the part of the investigator.

References:

Levaux MN, Potvin S, Sepehry AA, Sablier J, Mendrek A, Stip E. Computerized assessment of cognition in schizophrenia: promises and pitfalls of CANTAB. Eur Psychiatry. 2007 Mar;22(2):104-15. Review. Abstract

View all comments by Saurabh Gupta

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Sebastian Therman
Submitted 17 October 2007
Posted 17 October 2007

One remedy would be repeated practice over time before the actual baseline, sufficient to reach asymptotic ability. Computerized testing of reaction time measures, short-term memory span, etc. would all be quite cheap and easy to implement, for example, as a weekly session.

View all comments by Sebastian Therman

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Andrei Szoke
Submitted 1 November 2007
Posted 5 November 2007
  I recommend the Primary Papers

We recently completed a meta-analysis on "Longitudinal studies of cognition in schizophrenia" (to be published in the British Journal of Psychiatry) based on 53 studies providing data for 31 cognitive variables. When enough data were available (19 variables from eight cognitive tests), we compared the results of schizophrenic participants to those of normal controls.

Given the differences in methods and the fact that most of the studies included in our meta-analysis reported results of patients past their first episode (FE), it is surprising how close our results and conclusions are to those of Goldberg et al. In our analysis we found that, with two exceptions (semantic verbal fluency and the Boston naming test, which were stable), participants with schizophrenia improved their performances. The improvement was statistically significant for 19 variables (out of 29). However, controls also showed improvement in most of the variables due to the practice effect. A significant improvement (definite practice effect) was present for 10 variables, an improvement that did not reach significance (possible practice effect) was present in six more variables, and three variables showed no improvement. When compared with schizophrenic patients, controls showed similar improvement for 11 variables, significantly more improvement for seven variables (six of them from the “definite practice effect” group, one from the “possible practice effect” group), and less improvement for one variable (the Stroop interference score). Thus, these results suggest that for most of the cognitive variables, improvement seen in schizophrenic subjects does not exceed improvement due to the practice effect.

It is interesting to mention that in our analysis only two variables improved significantly more when patients had a change in their medication from first-generation antipsychotics (FGAs) to second-generation antipsychotics (SGAs). These variables were time to complete TMT B and the delayed recall of the Visual Reproduction test (from the WMS). In the Goldberg et al. study, the only two tests that showed more improvement in schizophrenic subjects than in controls were also the TMT and visual reproduction. Although in our study schizophrenic subjects did not improve more than controls, the two results (Goldberg’s and ours) taken together could be an indirect argument for a differential, specific effect of SGAs on those two (visuospatial) tasks. The placebo effect—see the comment by Richard Keefe—could explain why improvement in the study by Goldberg et al. was greater than in our meta-analysis. Studies of the effects of changing medication in the opposite direction, from SGAs to FGAs, could help validate or invalidate these hypotheses.

Goldberg et al. suggested that there could be a set of task characteristics that could be used to develop tasks resistant to the practice effect. Our own results are less optimistic, as they show that phonemic verbal fluency, despite a very similar format, does not share the “practice resistance” of semantic verbal fluency. However, we think that there is already a wealth of data that could be used to select the best cognitive tests. An alternative solution is the use of scales and questionnaires for evaluating cognition (which are sensitive to the placebo effect but not to the practice effect).

References:

Szoke A, Trandafir A, Dupont M-E, Meary A, Schurhoff F, Leboyer M. Longitudinal studies of cognition in schizophrenia. British Journal of Psychiatry (in press).

View all comments by Andrei Szoke

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Patricia Estani
Submitted 7 November 2007
Posted 8 November 2007
  I recommend the Primary Papers

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Robert Bilder, SRF Advisor (Disclosure)
Submitted 27 April 2010
Posted 27 April 2010

It’s wonderful to see this study in Nature, for it draws international attention to extremely important issues, including the degree to which cognitive training may yield generalizable effects, and to the amazing potential power of Web-based technologies to engage tens of thousands of individuals in behavioral research. It seems likely—and unfortunate—that for much of the world, the “take-home” message will be that all this “brain training” is bunk.

For me, the most exciting aspect of the study is that it was done at all. The basic design (engaging a TV audience to register for online experiments) is ingenious and indicates the awesome potential to use media for “good instead of evil.” Are there any investigators out there who would not be happy to recruit 52,617 research participants (presumably within the course of a single TV season)? Of course, this approach yielded only 11,430 people who completed the protocol (still sounds pretty good to me, especially since this reflects roughly 279,692 sessions completed). For those of us who struggle for years to obtain behavioral test data on several thousand research participants in our laboratories, this is a dream come true. Thus, I see the big contribution here as a validation of high-throughput behavioral phenotyping using the Internet, which will be necessary if we are to realize the promise of neuropsychiatric genetics.

The success of this validation experiment is obvious from the specific effects (not the generalization effect), which showed effect sizes (Cohen’s d) ranging from 0.72 to 1.63. It would be of high value to see considerably more data on the within-subject variability of each training test score and on the covariance of different scores across sessions, not to mention other quality metrics. These include response time consistency, which should be used to assure “on-task” behavior and the validity of each session. Despite the lack of these details, the positive findings of these large effects mean the results must be reasonably reliable (i.e., it is not feasible to get large and consistent effects from noise). This is very encouraging for those who want to see more Web-based behavioral phenotyping.

The “negative” results center on the lack of generalization to the “benchmark” tests, and this aspect of the outcome involves many more devils in the details. The authors are sensitive to many possibilities. The argument that there might be a misalignment of training with the benchmark tests is difficult to refute. The authors suggest that their 12 training tasks covered a “broad range of cognitive functions” and “are known to correlate highly with measures of general fluid intelligence or ‘g,’ and were therefore most likely to produce an improvement in the general level of cognitive functioning” [italics added]. But this assertion is not logically sound. It is not unreasonable, but there is no necessary reason to suppose that the best way to improve general cognitive ability is to practice tasks that load on general intellectual ability.

The suitability of the CANTAB (Cambridge Neuropsychological Test Automated Battery) tasks as benchmarks is also not an open-and-shut case. Yes, they are sensitive to brain damage and to the disruptive effects of various agents (for a comprehensive bibliography, see www.cantab.com/science/bibliography.asp). The data showing that these tasks can detect improvement in healthy people are more limited. One wonders whether the guanfacine and clonidine improvement effects observed in the one study that is cited (Jakala et al., 1999a) would be seen in the sample of people who participated in the new study. Incidentally, it could be noted that the same authors also reported impairments on other cognitive tasks using the same agents (Jakala et al., 1999b; Jakala et al., 1999c). The bottom line is that it may be difficult to see big improvements, particularly in general cognitive ability, in people who are to some extent preselected for their healthy cognitive function and interest in “exercising” their brains.

Overall, I believe more research is needed to determine which aspects of cognitive function may be most trainable, in whom, and under what circumstances. I worry that this publication may end up derailing important studies that can shed light on these issues, since it is much easier to conclude that something is “bunk” than to push the envelope and systematically study all the reasons such a study could generate negative generalization results. Let us hope the baby is not thrown out with the bathwater at this early stage of investigation.

References:

Jäkälä P, Sirviö J, Riekkinen M, Koivisto E, Kejonen K, Vanhanen M, Riekkinen P Jr. Guanfacine and clonidine, alpha 2-agonists, improve paired associates learning, but not delayed matching to sample, in humans. Neuropsychopharmacology. 1999a Feb;20(2):119-30. Abstract

Jäkälä P, Riekkinen M, Sirviö J, Koivisto E, Kejonen K, Vanhanen M, Riekkinen P Jr. Guanfacine, but not clonidine, improves planning and working memory performance in humans. Neuropsychopharmacology. 1999b May;20(5):460-70. Abstract

Jäkälä P, Riekkinen M, Sirviö J, Koivisto E, Riekkinen P Jr. Clonidine, but not guanfacine, impairs choice reaction time performance in young healthy volunteers. Neuropsychopharmacology. 1999c Oct;21(4):495-502. Abstract

View all comments by Robert Bilder

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Philip Harvey
Submitted 27 April 2010
Posted 27 April 2010

The paper from Owen et al. reports that a sample of community dwellers recruited to participate in a cognitive remediation study did not improve their cognitive performance except on the tasks on which they trained. While the results of cognitive remediation studies in schizophrenia have been inconsistent, the results of this study are particularly difficult to interpret, for several reasons:

1. Baseline performance on the "benchmarking" assessment does not appear to be adjusted for age, education, and other demographic predictive factors. As a result, we do not know whether the participants even had room to improve from baseline. It is possible that the volunteers in this study were very high performers at baseline and could not improve. Furthermore, if they were, in fact, high performers, their performance and the lack of any improvements with treatment may be irrelevant to poor performers.

2. There is no way to know if the research participants who completed the baseline and endpoint assessments were the same ones who completed the training. Without this control, which is provided in studies that directly observe subjects, there may be reason to be suspicious of the results.

3. Although this is a letter to the editor, methodological details are missing. Recent studies that reported successful cognitive remediation interventions have used dynamic titration to adjust task difficulty according to performance at the time of the training. While Owen et al. do not say, it seems unlikely that their study used dynamic titration. The use of this technique is the major difference between older, unsuccessful cognitive remediation interventions and recent, more successful ones delivered to people with schizophrenia (see McGurk et al., 2007 for a discussion).

4. Even more important is the small effect of the training, and of study participation in general, on elements of the benchmarking assessment. On half of the tests administered, these changes are smaller than those reported for simple retesting, without cognitive training, in people with schizophrenia (see Goldberg et al., 2007). Although the authors argue that these tests are known to be sensitive, the very small effect is particularly salient for paired associates learning. Thus, whether some of these tests are insensitive to changes arising from either treatment or practice requires consideration, particularly since we do not know how well the participants performed at baseline.

5. Most important, these data likely reflect substantial demand characteristics. A study mounted by a television show called Bang Goes the Theory certainly appears to pull for disconfirmation. It is possible that demand characteristics account for more variance than training does, since even successful training effects can be small. We know that environmental factors such as disability compensation account for more variance in real-world outcomes in schizophrenia than ability does (Rosenheck et al., 2006); it would be no surprise if demand characteristics account for more variance than ability as well.

Thus, while these results generate the reasonable suggestion that participation through the Internet in cognitive remediation does not guarantee improved cognitive performance, the current research design does not address many important issues regarding cognitive enhancement in clinical populations.

References:

Goldberg TE, Goldman RS, Burdick KE, Malhotra AK, Lencz T, Patel RC, Woerner MG, Schooler NR, Kane JM, Robinson DG. Cognitive improvement after treatment with second-generation antipsychotic medications in first-episode schizophrenia: Is it a practice effect? Arch Gen Psychiatry. 2007 Oct;64:1115-22. Abstract

McGurk SR, Twamley EW, Sitzer DI, McHugo GJ, Mueser KT. A meta-analysis of cognitive remediation in schizophrenia. Am J Psychiatry. 2007;164:1791-1802. Abstract

Rosenheck R, Leslie D, Keefe R, McEvoy J, Swartz M, Perkins D, Stroup S, Hsiao JK, Lieberman J, CATIE Study Investigators Group. Barriers to employment for people with schizophrenia. Am J Psychiatry. 2006;163:411-7. Abstract

View all comments by Philip Harvey

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Terry Goldberg
Submitted 7 May 2010
Posted 7 May 2010

This important paper by Owen and colleagues reads like a cautionary tale. In a Web-based study of over 11,000 presumptively healthy individuals, neither of two different types of cognitive training resulted in transfer of improvement to a reasoning task or to several well-validated cognitive tasks from the Cambridge Neuropsychological Test Automated Battery (CANTAB). I would like to point out three issues with the study.

First, the amount of training that individuals received at their own behest differed greatly. While the authors found no correlation between the number of training sessions and performance improvement or lack thereof, it is nevertheless possible that there is some critical threshold, either in number of sessions or, more importantly, time spent in sessions (not noted in the paper), that must be reached before transfer can occur. In other words, the relationship between training and transfer may be nonlinear and perhaps sigmoidal.

Second, it is possible that scores on some of the benchmark/transfer tasks were close to ceiling in this normal population, preventing gain. Perhaps more likely, they could have been close to floor (see Figure 1 in the paper; scores were seemingly quite low), making them insensitive to gain.

Last, as pointed out by Phil Harvey, the nature of the recruitment tool, a debunking TV show called Bang Goes the Theory, may have introduced a bias to disconfirm in the participants. This would be especially pertinent if participants understood the design of the study, which seems likely.

View all comments by Terry Goldberg

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Angus MacDonald, SRF Advisor
Submitted 11 May 2010
Posted 11 May 2010

Owen and colleagues are to be commended for drawing attention to the great constraint of cognitive training—that is, the potential for improvements on only the restricted set of abilities that were trained.

This has been the bugbear of cognitive training for a long time. Short story with a purpose: In 2001, when I raved about the remarkable results of Klingberg (later published as Olesen et al., 2004) to John Anderson, an esteemed cognitive psychologist at Carnegie Mellon University, he scoffed at the possibility that Klingberg's training might have led to improvements on Raven's Matrices, a measure of generalized intelligence. "People have been looking into this for a century. If working memory training improved intelligence, schools would be filled with memory training courses rather than math and language courses," he said (or something to that effect). This issue of training and generalization is not new, and the results of Owen and colleagues are consistent with a large body of twentieth-century research.

Owen and colleagues therefore remind us of an important caveat amid the current generation of excitement about neuroplasticity: behavioral effects are likely to be small for distal generalization. Striking results will likely require training that goes well beyond what is encountered in everyday or casual experience.

One way to improve on casual experience is dynamic titration. It is reasonably well established that when faced with a task of fixed difficulty, people will begin to asymptote on accuracy and then get faster and faster, with no hope of generalization. The online methods are mum about how this concern was addressed. (One certainly hopes that it was.)
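For readers unfamiliar with dynamic titration, a minimal sketch of one common form, an adaptive "2-up/1-down" staircase, follows. The function name, rule, and parameters are illustrative assumptions, not the procedure used in any of the studies discussed here:

```python
# Illustrative sketch of dynamic titration via a simple staircase rule:
# difficulty rises after a run of consecutive correct answers and falls
# after any error, keeping the task near the learner's performance
# threshold instead of letting accuracy asymptote at a fixed level.
# All names and parameters are hypothetical.

def titrate(responses, start_level=1, step_up=1, step_down=1, run_length=2):
    """Return the difficulty level after each trial under a 2-up/1-down rule."""
    level = start_level
    streak = 0
    levels = []
    for correct in responses:
        if correct:
            streak += 1
            if streak == run_length:      # two correct in a row -> harder
                level += step_up
                streak = 0
        else:                             # any error -> easier
            level = max(start_level, level - step_down)
            streak = 0
        levels.append(level)
    return levels
```

Under such a rule, a participant who keeps answering correctly keeps facing a harder task, which is the property fixed-difficulty training lacks.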

We have recently demonstrated that dynamic titration on an n-back task, in the context of broader working memory training, can produce local generalization (Haut et al., 2010). In that study, we examined changes in prefrontal cortical activity after cognitive training, compared with an active placebo control, in patients with schizophrenia. We found that training produced a stimulus-general improvement on the trained task, and that this improvement mapped onto greater frontopolar and dorsolateral prefrontal cortex activity. This result was quite similar to that reported in healthy adults by Olesen and colleagues (Olesen et al., 2004). I don't think I'll press for working memory training courses in my local school yet, but Owen won't be the reason.

References:

Olesen PJ, Westerberg H, Klingberg T. Increased prefrontal and parietal activity after training of working memory. Nat Neurosci. 2004;7(1):75-9. Epub 2003 Dec 14. Abstract

Haut KM, Lim KO, MacDonald AW III. Prefrontal cortical changes following cognitive training in patients with chronic schizophrenia: effects of practice, generalization and specificity. Neuropsychopharmacology. 2010 Apr 28. Abstract

View all comments by Angus MacDonald

Related News: Back to Reality: Computerized Cognitive Training Lends a Hand to Schizophrenia

Comment by:  Philip Harvey
Submitted 27 February 2012
Posted 27 February 2012

This is a fabulous study for several reasons.

The authors use cognitive training to enhance cognition and measure both cognitive functioning and the intactness of a neural network they previously identified. They find that training enhances cognition and improves regional brain activation. Further, social functioning is improved at six-month follow-up.

Since the authors measured cognition and functionally relevant outcomes, these data again provide support for the usefulness of cognitive remediation for cognition and functioning, as well as show that these interventions directly impact critical neural networks. The integration of brain, cognition, and functioning makes a strong argument for the universal application of cognitive remediation in people with schizophrenia.

View all comments by Philip Harvey

Related News: Back to Reality: Computerized Cognitive Training Lends a Hand to Schizophrenia

Comment by:  James Gold, SRF Advisor
Submitted 27 February 2012
Posted 27 February 2012

Dr. Subramaniam and colleagues deserve congratulations on an impressive study demonstrating that an extensive computerized cognitive training intervention appears to have effects on both brain physiology and performance on an untrained reality monitoring task—important evidence that the training does not only “teach to the test.” Further, the training appears to normalize the relationship between medial frontal activity and reality monitoring in patients, suggesting that the training has resulted in a reorganization of how patients are able to mobilize neural systems to meet the cognitive challenge. This result is important and adds to the evidence that the Posit Science approach may be valuable for people with schizophrenia. Adding to the importance of this result is the fact that, to date, there is no compelling, replicated evidence that any available pharmacological approach provides effective treatment for the cognitive impairments of schizophrenia. Indeed, given the apparent exodus of multiple major pharmaceutical companies from psychiatric treatment development research, the need to better understand the utility of psychosocial interventions is increasing as we search for answers for today’s patients and likely tomorrow’s as well.

No study is without problems in interpretation, and some arise here. For example, is it possible that the larger effect on the recognition of self-generated items versus externally presented items reflects ceiling effects in the externally presented condition? In addition, the interpretation of the social functioning results would have been clearer had change in BOLD signal been correlated with change in social functioning. As reported, there was no overall change in social functioning, but BOLD signal at the end of training correlated with social functioning level at six months. This makes it hard to interpret training-related improvement in medial prefrontal activity as causing the changes in social functioning; the link running directly from treatment condition to outcome is missing.

Lastly, it remains unclear exactly how the Posit Science program works at a neural level. Clearly, the approach was designed to target lower-level sensory-perceptual processes with the goal of increasing the fidelity and precision of the inputs to higher-order systems. It would certainly make sense that the gradual training approach, which titrates difficulty level, works this way. Other, mutually compatible, possibilities deserve consideration. Might the dense training approach serve as a kind of attentional and strategy training just as much as perceptual training? That is, patients learn how to get things “right.” This could occur because specific neural populations involved in task performance become more efficient as their task-relevant “receptive fields” become more precise. Alternatively, it is possible that higher-order attentional functions are also being trained to modulate lower-level systems more efficiently. And the type of generalization seen in this study might be easier to understand if the training inadvertently also benefited task-general cognitive systems that can be brought to bear on many untrained tasks. It remains for future work to see which understanding is more accurate.

In the meantime, the contribution of the Vinogradov team deserves recognition for an ambitious study, a potentially important and intriguing finding, and providing a ray of hope in an otherwise pretty dark treatment development landscape.

View all comments by James Gold

Related News: Back to Reality: Computerized Cognitive Training Lends a Hand to Schizophrenia

Comment by:  Robert McCarley
Submitted 7 March 2012
Posted 8 March 2012
I recommend the Primary Papers

Very exciting and hopeful data, especially in a patient population that had been ill for nearly 20 years. These data argue strongly for similar trials at other sites.

From a scientific point of view, it will be interesting to see if these functional changes will be accompanied by structural alterations of increased MRI gray matter, compatible with plasticity and increased dendritic and synaptic elements.

View all comments by Robert McCarley