Schizophrenia Research Forum - A Catalyst for Creative Thinking

Training Study Questions Fixed Nature of Fluid Intelligence

7 May 2008. For people who struggle with cognitive problems related to schizophrenia or who simply want to hone their ability to think on the fly, a new study may provide a glimmer of hope. In yesterday’s PNAS early edition, available online, Susanne M. Jaeggi of the University of Michigan at Ann Arbor and her colleagues report that the ability to adapt to new situations, to reason, and to solve problems may not be set in stone after all. The researchers contend that this kind of ability, known as fluid intelligence or Gf, can improve through training on a working memory task. Although the study examined healthy subjects, its findings may someday inform the design of cognitive remediation programs to help people with schizophrenia function better at work and school.

Interest in cognitive training reflects, in part, disappointment in the power of existing drugs to normalize cognition in schizophrenia (see SRF forum discussion). Although clinical trials suggest that atypical antipsychotics may produce modest cognitive improvement, repeated test-taking alone can enhance subjects’ performance on tests of mental functioning (Szöke et al., 2008), and controlling for practice effects may wipe out many of the supposed cognitive benefits of these medications (see SRF related news story). Other trials suggest that cognitive rehabilitation improves both the test performance of people with schizophrenia and their real-world functioning as reflected, for instance, in work outcomes (McGurk et al., 2007; Lindenmayer et al., 2008). Yet practice effects can confound these trials, too.

A problem to solve
Turning specifically to fluid intelligence, Jaeggi and collaborators Martin Buschkuehl, John Jonides, and Walter J. Perrig note that practice can improve scores on Gf tests; “however, it has been demonstrated that practice on these tests decreases their novelty and with that the underlying Gf-processes ([te Nijenhuis et al., 2007 Intelligence 35:283–300]) so that the predictive value of the tests for other tasks disappears.” In other words, the so-called gains may not transfer to other situations. To determine whether cognitive training would improve fluid intelligence, the researchers sought “a task that shares many of the features and processes of Gf tasks, but that is still different enough from the Gf tasks themselves to avoid mere practice effects.”

The notable correlations between working memory and fluid intelligence have prompted sparring over the extent and meaning of the overlap between them (see, for example, Ackerman et al., 2005; Oberauer et al., 2005; Kane et al., 2005). According to Jaeggi and associates, both rely on the ability to direct attention; they also share capacity limits, as shown by the number of items held in working memory or the number of connections made in a reasoning task. In addition, they apparently involve similar neural pathways in the prefrontal and parietal cortices (see SRF related news story). Given these similarities, the researchers thought that the benefits of training to enhance working memory might transfer to fluid intelligence.

To test this notion, the researchers recruited 70 healthy young adults, half of whom received working memory training under one of four regimens in which training took place over eight, 12, 17, or 19 days. The remaining subjects formed the matched control groups. Subjects in the training groups received alternate forms of the same Gf test before and after the training; the control groups underwent testing at the same intervals. Gf measures consisted of either the short form of the Bochumer Matrizen-Test or, for the eight-session trial, Raven’s Advanced Progressive Matrices.

The memory task required subjects to monitor two series of simultaneously presented stimuli—specifically, single consonants played over headphones and marks appearing at various spots on a computer screen. As in other n-back tasks, subjects then had to indicate whether the stimulus matched one that was presented a certain number, or n, of trials ago. In this paradigm, n increased by one if they performed well or decreased by one if they did poorly. The authors write that changing the n and using two different kinds of stimuli “discouraged the development of task-specific strategies and the engagement of automatic processes.”
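The adaptive staircase the paragraph describes can be sketched in a few lines. This is a rough illustration of the logic only, not the authors' software, and the accuracy thresholds are invented for the example:

```python
# Sketch of the adaptive difficulty rule in a dual n-back task, as described
# above: n rises after a good block and falls after a poor one.
# The thresholds (90% to move up, below 75% to move down) are illustrative
# assumptions, not the values used by Jaeggi et al.

def update_n(n: int, accuracy: float,
             raise_at: float = 0.90, lower_at: float = 0.75) -> int:
    """Return the n for the next block, given accuracy on the last block."""
    if accuracy >= raise_at:
        return n + 1
    if accuracy < lower_at:
        return max(1, n - 1)  # never drop below 1-back
    return n

# A subject at 3-back who scores 95% moves up to 4-back;
# one who scores 60% drops to 2-back.
print(update_n(3, 0.95))  # -> 4
print(update_n(3, 0.60))  # -> 2
print(update_n(1, 0.10))  # -> 1 (floor at 1-back)
```

Keeping the task at the edge of the subject's ability is what, on the authors' account, discourages task-specific strategies and automatic processing.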

High-performing transfer students
Along with the expected memory gains, the researchers report “the striking result of a training-related gain in Gf,” with subjects who received the intervention showing “dramatic improvement.” The control groups made significant gains, too, presumably from repeated test taking, but the trained groups improved more. In fact, those who trained more gained more, as shown by a dose-response relationship between the amount of training received and the benefits gained.

To explain how the working memory task might foster fluid intelligence, Jaeggi and colleagues suggest that it engages many of the same processes. For instance, they write, “One reason for having obtained transfer between working memory and measures of Gf is that our training procedure may have facilitated the ability to control attention,” a skill that aids adaptive thinking. In addition, both working memory and Gf involve executive functions, such as those involved in multi-tasking, and the ability to relate one item to another.

On the other hand, the similarities do not tell the whole story. According to the researchers, “These data indicate that the transfer effect on Gf scores goes beyond an increase in working memory capacity alone.” They based that conclusion on analyses that controlled for working memory, as measured by subjects’ performance on digit-span and reading-span tasks.

Despite the lack of a direct link to schizophrenia, these findings may prompt some adaptive thinking about the cognitive problems that constitute some of the illness’s most disruptive symptoms. As Jaeggi and her coauthors write, “Instead of regarding Gf as an immutable trait, our data provide evidence that, with appropriate training, there is potential to improve Gf.” Whether their study and those to come will spur a rethinking of how fluid this kind of intelligence might be in schizophrenia, and how to design rehabilitation programs that address deficits, remains to be seen.—Victoria L. Wilcox.

Reference:
Jaeggi SM, Buschkuehl M, Jonides J, Perrig WJ. Improving fluid intelligence with training on working memory. PNAS early edition. 2008 April 28. Abstract

Comments on News and Primary Papers
Comment by:  Andrei Szoke
Submitted 7 May 2008
Posted 7 May 2008

The authors suggest that they have found what could be considered the Holy Grail of cognitive research—a means to enhance intelligence. The article offers some hope, in that performance on a task considered to measure fluid intelligence improved even though subjects were not trained on that specific task. The “dual n-back” training task, although not pure working memory (as the authors acknowledge), is a very interesting experimental paradigm. Unfortunately, the authors fail to convince us of its usefulness in enhancing “fluid intelligence.” When a drug is tested, any effect, to be convincingly supported, must be demonstrated in a double-blind, randomized, placebo (or standard treatment)-controlled trial. The same should be true for any means, pharmacological or otherwise, aimed at enhancing cognition.

As for the issue of whether this training will have the same effects in schizophrenic subjects as it had in these normal, motivated controls, that is an entirely different question that is not addressed in the article. I think that future studies have to address all those limitations (randomization of subjects, a similar amount of training with a different task in controls, a double-blind design) before any firm conclusions could be drawn.

View all comments by Andrei Szoke

Comments on Related News


Related News: Relational Memory Deficits Traced to Parietal Cortex/Hippocampus

Comment by:  Deborah Levy
Submitted 19 May 2006
Posted 19 May 2006

Comment by Deborah Levy, Debra Titone, and Howard Eichenbaum.
It is easy to appreciate why relational memory organization is such a compelling topic in studies of psychotic conditions. Relational memory allows one to flexibly manipulate information to discern new relationships based on known facts. The memory representations that support implicit reasoning of this type emerge effortlessly when the medial temporal lobe functions normally, whether navigating from a detour in a usual route or extrapolating that which is common across a set of individual memory traces. Relational thinking gone awry is a fundamental component of psychotic thinking. Inferential reasoning, referential ideas, and delusional extrapolations all involve making connections between unrelated things. These unwarranted connections, in turn, lead to erroneous (and potentially unrealistic) conclusions.

The kind of relational memory studied by Ongur et al. (2006) involves transitive inference (TI), or the capacity to use knowledge of how individual memory traces overlap, to correctly infer a new relationship that is predicated on already stored information. For example, it is straightforward to make the TI that Sally is taller than Peter if one knows the two related premises, Sally is taller than Frank and Frank is taller than Peter.
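The inference just described amounts to chaining stored premises. As a toy sketch (our illustration only, with hypothetical names; this is not part of the experimental paradigm), the "taller than" judgment can be computed as reachability over the stored pairs:

```python
# Toy transitive-inference check: given stored "X is taller than Y" premises,
# decide whether a is taller than b by chaining premises (graph reachability).
# Names and premises are hypothetical, echoing the example in the text.
from collections import defaultdict, deque

def taller_than(premises, a, b):
    graph = defaultdict(list)
    for taller, shorter in premises:
        graph[taller].append(shorter)
    seen, queue = set(), deque(graph[a])  # start from a's direct premises
    while queue:
        x = queue.popleft()
        if x == b:
            return True
        if x not in seen:
            seen.add(x)
            queue.extend(graph[x])
    return False

premises = [("Sally", "Frank"), ("Frank", "Peter")]
print(taller_than(premises, "Sally", "Peter"))  # -> True (inferred via Frank)
print(taller_than(premises, "Peter", "Sally"))  # -> False
```

The interesting psychological point is that the premise pairs are learned separately; only a flexible, relational memory representation supports the chained judgment.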

The study by Ongur et al. follows up on two studies initiated by Titone’s translation of Eichenbaum’s paradigm for testing TI in rodents. The first study established that schizophrenic patients show TI deficits compared with controls (Titone et al., 2004). Using a modification of the same TI paradigm adapted for use in the imaging environment, the second study assessed which brain regions were selectively activated when nonpsychiatric controls made TI relational judgments (relative to non-TI relational judgments). Of particular interest was whether the hippocampus (HP) would be one of the regions to show selective activation, since Eichenbaum’s work in rodents had demonstrated that animals with HP lesions or whose HPs have been disconnected from their subcortical and cortical connections lose the capacity for TI (Dusek and Eichenbaum, 1997; 1998). The imaging study showed a selective association between TI and activation of right HP, pre-supplementary motor area, left prefrontal cortex, left parietal cortex, and thalamus (Heckers et al., 2004; see also Preston et al., 2004). Having established the neural circuitry subserving TI in the healthy brain, the stage was set to compare the patterns of regional brain activation in schizophrenic and control subjects, the focus of the study by Ongur et al.

In addition to shedding some light on a potential relational memory impairment in schizophrenia, the Ongur et al. paper is useful for illustrating some of the complexities that arise in designing and interpreting the results of imaging studies generally and of transitive inference studies in particular. Below we discuss several of them and the many intriguing questions that call for resolution in future work.

One main source of complexity is that the key condition that demonstrates the capacity for TI is also the most difficult, especially for schizophrenic patients. Ongur et al. report that in controls, accuracy does not differ between the BD and non-BD sequential inference trials. However, the mean accuracy score of the controls is in the direction of the BD condition being less accurate. In addition, had response latencies been reported for those two conditions as well, it is very likely that correct response latencies for BD would have been longer than those for non-BD sequential inference trials in both groups. Thus, with these particular stimuli it is not straightforward to distinguish discrete effects of TI from the increased difficulty of TI judgments relative to non-TI judgments on performance or on regional brain activation. In other words, did schizophrenics perform more poorly on BD than controls because BD is more difficult than non-BD or because their capacity for TI is compromised? Was the increased activation bilaterally of inferior parietal cortex during BD (relative to non-BD) in controls a function of TI or of task difficulty? Was the increased activation of right inferior parietal cortex in schizophrenics during BD (relative to non-BD) a function of TI or of task difficulty? (See Stark and Squire, 2003.)

Schizophrenic patients performed the critical TI task (BD vs. non-BD) significantly worse than controls, which is to be expected given the previously mentioned increased task difficulty of the BD condition. Indeed, as a group they performed no better than chance. The groups also differed in pattern of regional brain activation. In the whole brain analysis, controls activated inferior parietal bilaterally, whereas schizophrenics activated right inferior parietal, inferior frontal, and premotor cortices. Neither group significantly activated HP. The only region to show a significant activation difference between the groups was right inferior parietal cortex. Because performance and activation are confounded, the pattern of results does not lend itself to a simple interpretation. That is, did these differences in regional activation occur because schizophrenics could not perform a task that depends on these regions, or did the poor performance occur because schizophrenics could not activate critical regions? The same confound affects the ability to interpret the results of the ROI analysis of HP. Was HP activation decreased during BD in schizophrenics because they were not performing a task that depends on HP, or was performance poor because patients were not able to activate HP?

Several aspects of these results make it difficult to characterize the role of the HP in the neural circuitry subserving TI in humans. First, in the whole brain analysis, controls activated HP only in the analysis of a general comparison of “TI versus non-TI” conditions. The specific critical TI comparison condition (BD vs. non-BD) did not activate HP in controls in the whole brain analysis, a finding that would have been expected based on Eichenbaum and colleagues’ rodent work (Dusek and Eichenbaum, 1997; 1998). Second, was the decreased activation of HP during BD in the patients related to excessively high basal levels of activation? Third, the one region that did show a difference between BD and non-BD in controls and that distinguished schizophrenics from controls was right inferior parietal cortex, not HP. Based on the results of the whole brain analysis, it would be interesting to see the results of an ROI analysis of inferior parietal, inferior frontal, premotor cortex, and anterior cingulate. Fourth, to what extent do sensitivity and power contribute to the difference between the results of the whole brain and ROI analyses and between the more general TI versus non-TI contrast and the specific BD versus non-BD contrast? Fifth, was the increased activation of inferior frontal regions during BD in schizophrenics an effort to compensate for under-recruitment of inferior parietal cortex bilaterally or HP (see Bonner-Jackson et al., 2005)?

Although the results of the Ongur et al. (2006) study are promising, to characterize the neural basis of relational memory deficits in schizophrenia, at least two additional challenges must be met. The first is to differentiate TI from task difficulty. Our recent modification of the TI paradigm that was used in the Titone et al. (2004), Heckers et al (2004), and Ongur et al. (2006) studies disambiguates TI deficits from difficulty effects. The preliminary results are quite promising in showing that schizophrenics show behavioral deficits in TI independent of difficulty. The second is to unconfound behavioral performance and neural activation. One way to do this is to match the comparison groups on behavioral performance. Another is to separately analyze imaging data from individuals with schizophrenia who can do TI at greater than chance levels and those who cannot.

References:

Bonner-Jackson A, Haut K, Csernansky JG, Barch DM. The influence of encoding strategy on episodic memory and cortical activity in schizophrenia. Biol Psychiatry. 2005 Jul 1;58(1):47-55. Abstract

Dusek JA, Eichenbaum H. The hippocampus and memory for orderly stimulus relations. Proc Natl Acad Sci U S A. 1997 Jun 24;94(13):7109-14. Abstract

Dusek JA, Eichenbaum H. The hippocampus and transverse patterning guided by olfactory cues. Behav Neurosci. 1998 Aug;112(4):762-71. Abstract

Heckers S, Zalesak M, Weiss AP, Ditman T, Titone D. Hippocampal activation during transitive inference in humans. Hippocampus. 2004;14(2):153-62. Abstract

Ongur D, Cullen TJ, Wolf DH, Rohan M, Barreira P, Zalesak M, Heckers S. The neural basis of relational memory deficits in schizophrenia. Arch Gen Psychiatry. 2006 Apr;63(4):356-65. Abstract

Preston AR, Shrager Y, Dudukovic NM, Gabrieli JD. Hippocampal contribution to the novel use of relational information in declarative memory. Hippocampus. 2004;14(2):148-52. No abstract available. Abstract

Stark CE, Squire LR. Hippocampal damage equally impairs memory for single items and memory for conjunctions. Hippocampus. 2003;13(2):281-92. Abstract

Titone D, Ditman T, Holzman PS, Eichenbaum H, Levy DL. Transitive inference in schizophrenia: impairments in relational memory organization. Schizophr Res. 2004 Jun 1;68(2-3):235-47. Abstract

View all comments by Deborah Levy

Related News: Relational Memory Deficits Traced to Parietal Cortex/Hippocampus

Comment by:  Patricia Estani
Submitted 3 June 2006
Posted 3 June 2006
  I recommend the Primary Papers

Related News: Relational Memory Deficits Traced to Parietal Cortex/Hippocampus

Comment by:  Terry Goldberg
Submitted 19 June 2006
Posted 19 June 2006

Ongur, Heckers, and colleagues present an interesting set of findings about memory in schizophrenia. Using a transitive inference paradigm to explore relational memory (inferring that A>C if one knows A>B and B>C), they showed both a selective behavioral deficit for one particular type of transitive inference (“BD”), which can only be solved through logic and not through reinforcement alone, and abnormalities in BOLD activation in parietal cortex, hippocampus, and anterior cingulate in schizophrenia. The study is exciting because it pinpoints a relatively specific mnemonic processing abnormality, a task not as easy as it may appear. Our own behavioral work (Goldberg, Elvevaag, and colleagues) has emphasized quantitative but not qualitative behavioral memory processing impairments in paradigms that included levels of encoding, false memory, and AB-ABr interference. In a computational model of this work, marked reductions in connectivity (but not “neuronal number” or “noise”) in the inputs to “entorhinal cortex” and from entorhinal cortex to hippocampus appeared to fit the data well. Given the Heckers findings, it will be interesting to see how the model handles transitive inference.

As in every study, no matter the degree of excellence, there are issues. The study pivots on the presentation of eight BD pairings. Is this really enough? There have been consistent murmurings in the field that the transitive logical computations themselves may occur in prefrontal cortex. While the pre-SMA activation may technically fit the bill, one wonders if a region oft thought to be responsible for simple chaining of motor sequences is really up to the task of determining transitive relations. Perhaps most interesting is the finding of parietal cortical hypoactivation in schizophrenia during transitive inference, but is it specific to engagement in a transitivity judgment, or is it more generally a visual discrimination processing abnormality?

View all comments by Terry Goldberg

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Richard Keefe
Submitted 12 October 2007
Posted 12 October 2007

As stated in the CATIE and CAFÉ neurocognition manuscripts, it is possible that the small improvements in neurocognitive performance following randomization to one of the antipsychotic treatments in these studies are due solely to practice effects or expectation biases. This statement is affirmed by the excellent recent study by Goldberg et al., in which improvements in cognitive performance were almost identical in magnitude to the practice effects found in healthy controls. While these data may be disappointing to the hope that second-generation medications improve cognition, they may also suggest that cognitive performance is less recalcitrant to change than previously expected.

In the context of a double-blind study design, the degree of cognitive enhancement observed for each treatment group is a function of three major variables: treatment effect, placebo effect, and practice effect. In studies of antipsychotic medications without a placebo control group, practice and placebo effects in schizophrenia cannot be disentangled from treatment effects. They also cannot be disentangled from each other. Recent data from a double-blind study comparing the effects of donepezil hydrochloride and placebo in a highly refined sample of 226 patients with schizophrenia stabilized while taking second-generation antipsychotics suggested that patients taking placebo had neurocognitive effect size improvements (0.22 SD after being tested twice over 6 weeks; 0.45 SD after the third assessment at 12 weeks) on the same test battery used in the CATIE and CAFÉ studies, suggesting a practice or placebo effect (Keefe et al., Neuropsychopharmacology, in press) consistent with the improvements reported in the CATIE and CAFÉ treatment studies. These cognitive improvements are in contrast to test-retest data collected in patients with schizophrenia tested with the MATRICS Consensus Cognitive Battery (MCCB; Nuechterlein et al., in press) and the Brief Assessment of Cognition in Schizophrenia (BACS; Keefe et al., 2004), which showed very small practice effects. The contrast of the data from these test-retest studies that did not involve the initiation of new treatments with cognitive improvements following the initiation of antipsychotic treatment or placebo suggests that attribution biases beyond simple practice effects may be at work.
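For readers unfamiliar with the units, figures such as 0.22 SD and 0.45 SD are standardized mean changes: the average gain divided by a standard deviation. A generic sketch of that arithmetic, using one common convention (dividing by the baseline SD) and invented scores rather than study data:

```python
# Standardized mean change (an effect size in SD units): average retest
# gain divided by the baseline standard deviation. One common convention;
# other analyses divide by a pooled or change-score SD instead.
# The scores below are invented for illustration, not study data.
from statistics import mean, stdev

def retest_effect_size(baseline, retest):
    gains = [r - b for b, r in zip(baseline, retest)]
    return mean(gains) / stdev(baseline)

baseline = [40, 45, 50, 55]   # hypothetical baseline test scores
retest   = [43, 47, 52, 58]   # hypothetical scores at retest
print(round(retest_effect_size(baseline, retest), 2))  # -> 0.39
```

An effect of roughly 0.2 SD is conventionally read as small and 0.5 SD as medium, which is why a 0.45 SD gain on placebo alone is a substantial confound.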

Test-retest data from patients tested twice within a briefer period than the test interval in the four treatment studies discussed above suggest that schizophrenia patients demonstrate relatively small improvements in executive functions (Keefe et al., 2004; Nuechterlein et al., in press) and the WAIS digit-symbol test (Nuechterlein et al., in press), and medium improvements on tests of verbal memory only when identical versions are repeated (Hawkins and Wexler, 1999; Keefe et al., 2004) but not on tests of verbal fluency (Keefe et al., 2004; Nuechterlein et al., in press). In the donepezil/placebo study, patients who received placebo improved substantially across several cognitive domains. Although not tested directly, this series of results suggests that the magnitude of placebo effects in cognitive enhancement trials may exceed the reported size of practice-related improvements in studies of schizophrenia patients tested twice without the prospect of the initiation of a cognitive intervention.

The greater improvements in cognition found in the context of a placebo-controlled trial could be due to a variety of psychological factors. When a patient enters into a trial or is treated with a medication that is believed to contribute beneficially to cognitive performance, rater bias and expectation bias can have strong effects on performance. Patients who are told that their cognitive abilities might improve may be able to perform better on the test batteries used in the study simply because their expectations become more optimistic. Second, testers who believe that a patient will have cognitive improvement, or hope for such improvement, could administer the tests in a more hopeful, positive manner, which can help the patient raise his or her expectations for performance and thus engage motivational systems that were previously disengaged (Keefe, 2006). Such expectation bias can also lead to inaccuracies in scoring; since many cognitive tests require the use of judgment to determine final scores, hopeful testers are more likely to give the “benefit of the doubt” to patients after they have entered into a study in which the treatment is potentially cognitively enhancing. Third, this same type of expectation could have an impact on the support that a patient receives in his or her community/living situation. If the people who interact regularly with the patient begin looking for better performance on cognitively related tasks, these expectations could become self-fulfilling in that they may raise the confidence and motivation of the patient to perform well on such tasks, including cognitive testing.

The factors associated with improvement during a placebo-controlled trial are indeed complex, and it is difficult to distinguish practice effects from placebo effects. However, the relatively small clinical improvement in test-retest designs without treatment or placebo intervention suggests that any potential practice effects may at least be potentiated by placebo effects.

The implications of this series of results include a methodological caution and a reason for optimism. Regarding the caution, future trials of cognitive-enhancing compounds might need to be designed in such a way that practice and placebo effects are reduced. Very few treatment studies of patients with schizophrenia have employed a priori methodological strategies to reduce the magnitude of potential practice effects, such as the use of a placebo run-in period with one or more administrations of the cognitive battery prior to randomization. Regarding the optimism, these studies suggest that schizophrenia cognition (perhaps especially when freed from the dampening effects of large doses of high-potency medications such as haloperidol) could be more plastic than had been previously assumed; it may be as sensitive to experience-dependent learning in patients with schizophrenia as in healthy controls, and it may benefit from improved psychological expectations. While this is a methodological nuisance for clinical trial designs, it may also reveal an unexpectedly large potential gain from psychological interventions such as cognitive remediation, cognitive-behavioral therapy, and even encouragement.

References:

Goldberg TE, Goldman RS, Burdick KE, Malhotra AK, Lencz T, Patel RC, Woerner MG, Schooler NR, Kane JM, Robinson DG. Cognitive improvement after treatment with second-generation antipsychotic medications in first-episode schizophrenia: Is it a practice effect? Arch Gen Psychiatry. 2007 Oct;64:1115-1122. Abstract

Hawkins KA, Wexler BE (1999). California Verbal Learning Test practice effects in a schizophrenia sample. Schizophr Res 39: 73-78. Abstract

Keefe RSE. Missing the sweet spot: Disengagement in schizophrenia. Psychiatry, 2006; 3: 36-41.

Keefe RSE, Malhotra AK, Meltzer H, Kane JM, Buchanan RW, Murthy A, Sovel M, Li, C, Goldman R. Efficacy and safety of donepezil in patients with schizophrenia or schizoaffective disorder: Significant placebo/practice effects in a 12-week, randomized, double-blind, placebo-controlled trial. Neuropsychopharmacology, 2007 [Epub ahead of print]. Abstract

Keefe RSE¸ Goldberg TE, Harvey PD, Gold JM, Poe M, Coughenour L. The Brief Assessment of Cognition in Schizophrenia: Reliability, sensitivity, and comparison with a standard neurocognitive battery. Schizophrenia Research, 2004; 68: 283-297. Abstract

Nuechterlein KH, Green MF, Kern RS, Baade LE, Barch D, Cohen J, Essock S, Fenton WS, Frese FJ, Gold JM, Goldberg T, Heaton R, Keefe RSE, Kraemer H, Mesholam-Gately R, Seidman LJ, Stover E, Weinberger D, Young AS, Zalcman S, Marder SR. The MATRICS consensus cognitive battery: Part 1. Test selection, reliability, and validity. The American Journal of Psychiatry (in press).

View all comments by Richard Keefe

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Narsimha Pinninti (Disclosure)
Submitted 15 October 2007
Posted 15 October 2007
  I recommend the Primary Papers

This article questions the prevailing notion that antipsychotic medications (particularly second-generation antipsychotics) improve cognitive functioning in individuals with schizophrenia. As the authors rightly note, practice effects should be taken into account before attributing improvements to drug effects.

View all comments by Narsimha Pinninti

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Saurabh Gupta
Submitted 15 October 2007
Posted 15 October 2007
  I recommend the Primary Papers

I propose that future studies use computerized cognitive assessment batteries such as CANTAB or CogTest, which have at least two advantages. These tools offer multiple parallel forms of each test, so at each assessment within a study participants receive a similar, but not identical, test of the same cognitive function. Computerized assessment also reduces the chance of subjective bias on the part of the investigator.

References:

Levaux MN, Potvin S, Sepehry AA, Sablier J, Mendrek A, Stip E. Computerized assessment of cognition in schizophrenia: promises and pitfalls of CANTAB. Eur Psychiatry. 2007 Mar;22(2):104-15. Review. Abstract

View all comments by Saurabh Gupta

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Sebastian Therman
Submitted 17 October 2007
Posted 17 October 2007

One remedy would be repeated practice over time before the actual baseline, sufficient to reach asymptotic ability. Computerized testing of reaction time measures, short-term memory span, etc. would all be quite cheap and easy to implement, for example, as a weekly session.

View all comments by Sebastian Therman

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Andrei Szoke
Submitted 1 November 2007
Posted 5 November 2007
  I recommend the Primary Papers

We recently completed a meta-analysis on "Longitudinal studies of cognition in schizophrenia" (to be published in the British Journal of Psychiatry) based on 53 studies providing data for 31 cognitive variables. When enough data were available (19 variables from eight cognitive tests), we compared the results of schizophrenic participants to those of normal controls.

Given the differences in methods and the fact that most of the studies included in our meta-analysis reported results from patients past their first episode (FE), it is surprising how close our results and conclusions are to those of Goldberg et al. In our analysis we found that, with two exceptions (semantic verbal fluency and the Boston naming test, which were stable), participants with schizophrenia improved their performance. The improvement was statistically significant for 19 variables (out of 29). However, controls also showed improvement on most of the variables due to the practice effect. A significant improvement (definite practice effect) was present for 10 variables; an improvement that did not reach significance (possible practice effect) was present for six more variables; and three variables showed no improvement. When compared with schizophrenic patients, controls showed similar improvement for 11 variables, significantly more improvement for seven variables (six of them from the “definite practice effect” group, one from the “possible practice effect” group), and less improvement for one variable (the Stroop interference score). Thus, these results suggest that for most cognitive variables, the improvement seen in schizophrenic subjects does not exceed improvement due to the practice effect.

It is interesting to note that in our analysis only two variables improved significantly more when patients changed their medication from first-generation antipsychotics (FGAs) to second-generation antipsychotics (SGAs). These variables were time to complete TMT B and the delayed recall of the Visual Reproduction subtest (from the WMS). In the Goldberg et al. study, the only two tests that showed more improvement in schizophrenic subjects than in controls were also the TMT and Visual Reproduction. Although in our study schizophrenic subjects did not improve more than controls, the two results (Goldberg’s and ours) taken together could be an indirect argument for a differential, specific effect of SGAs on those two (visuospatial) tasks. The placebo effect—see the comment by Richard Keefe—could explain why improvement in the study by Goldberg et al. was greater than in our meta-analysis. Studies of the effects of changing medication in the opposite direction, from SGAs to FGAs, could help validate or invalidate these hypotheses.

Goldberg et al. suggested that there could be a set of task characteristics that could be used to develop tasks resistant to the practice effect. Our own results are less optimistic, as they show that phonemic verbal fluency, despite a very similar format, does not share the “practice resistance” of semantic verbal fluency. However, we think that there is already a wealth of data that could be used to select the best cognitive tests. An alternative solution is the use of scales and questionnaires for evaluating cognition (which are sensitive to the placebo effect but not to the practice effect).

References:

Szoke A, Trandafir A, Dupont M-E, Meary A, Schurhoff F, Leboyer M. Longitudinal studies of cognition in schizophrenia. British Journal of Psychiatry (in press).

View all comments by Andrei Szoke

Related News: Antipsychotics and Cognition: Practice Makes Perfect Confounder

Comment by:  Patricia Estani
Submitted 7 November 2007
Posted 8 November 2007
  I recommend the Primary Papers

Related News: Cognition and Dopamine—D1 Receptors a Damper on Working Memory?

Comment by:  Michael J. Frank
Submitted 19 February 2009
Posted 19 February 2009

McNab and colleagues provide groundbreaking evidence showing that cognitive training with working memory tasks over a five-week period impacts D1 dopamine receptor availability in prefrontal cortex. Links between prefrontal D1 receptor function and working memory are often thought to be one-directional, i.e., that better D1 function supports better working memory, but here the authors show that working memory practice reciprocally affects D1 receptors.

An influential body of empirical and theoretical research suggests that an optimal level of prefrontal D1 receptor stimulation is required for working memory function (e.g., Seamans and Yang, 2004). Because acute pharmacological targeting of prefrontal D1 receptors reliably alters working memory, causal directionality from D1 to working memory remains evident. Nevertheless, these findings cast several other studies in a new light. Namely, when a population exhibits impaired (or enhanced) working memory and PET studies indicate differences in dopaminergic function, it is no longer clear which variable is the main driving factor. For example, those who engage in cognitively demanding tasks on a day-to-day basis may show better working memory, and the dopaminergic correlates may be reactive rather than causal. Finally, the possibility cannot be completely discounted that the observed changes in D1 receptor binding may reflect a learned increase in prefrontal dopamine release; this would explain the general tendency for D1 receptor availability to decrease with cognitive training, due to competition with endogenous dopamine.

The McNab study also finds that only cortical D1 receptors, and not subcortical D2 receptors, were altered by cognitive training. The significance of this null effect of D2 receptors is not yet clear. First, all tasks used in the training study involved recalling the ordering of a sequence of stimuli and repeating them back when probed. While clearly taxing working memory, these tasks did not require subjects to attend to some stimuli while ignoring other distracting stimuli, and did not require working memory manipulation. Both manipulation and updating are characteristics of many working memory tasks, particularly those that depend on and/or activate the basal ganglia. Indeed, previous work by the same group (McNab and Klingberg, 2008) showed that basal ganglia activity is predictive of the ability to filter out irrelevant information from working memory. Similarly, Dahlin et al. (2008) reported that training on tasks involving working memory updating leads to generalized enhanced performance in other working memory tasks, and that this transfer of learned knowledge is predicted by striatal activity. These results are consistent with computational models suggesting that the basal ganglia act as a gate to determine when and when not to update prefrontal working memory representations and are highly plastic as a function of reinforcement. Thus, future research is needed to test whether training on filtering, updating, or manipulation tasks leads to changes in striatal D2 receptor function.

References:

McNab, F. and Klingberg, T. (2008). Prefrontal cortex and basal ganglia control access to working memory. Nature Neuroscience, 11, 103-107. Abstract

Dahlin, E., Neely, A.S., Larsson, A., Bäckman, L. & Nyberg, L. (2008). Transfer of learning after updating training mediated by the striatum. Science, 320, 1510-1512. Abstract

Seamans, J.K. and Yang, C.R. (2004). The principal features and mechanisms of dopamine modulation in the prefrontal cortex. Progress in Neurobiology, 74, 1-57. Abstract

View all comments by Michael J. Frank

Related News: Cognition and Dopamine—D1 Receptors a Damper on Working Memory?

Comment by:  Terry Goldberg
Submitted 3 March 2009
Posted 3 March 2009

This is an important article that describes profound changes in dopamine D1 receptor binding potential after working memory training in healthy male controls. The study rests on prior work demonstrating that brain volume changes with practice (e.g., Draganski and May, 2008) and that dopamine can be released at the synapse in measurable amounts even during, dare I say, fairly trivial activities (e.g., playing a video game; Koepp et al., 1998). The present study demonstrated that binding potential of D1 receptors decreased in cortical regions (right ventrolateral frontal, right dorsolateral PFC, and posterior cortices) with training, and the magnitude of this decrease correlated with the improvement during training. Binding potential of D2 receptors in the striatum did not change. Unfortunately, D2 receptors in the cortex could not be measured with raclopride.

Two points come to mind. One is theoretical—how long would such a change remain, i.e., is it transient or is it fixed? This has implications for understanding practice-related phenomena and their transfer or consolidation. The second is technical. A number of studies have shown that practice can change not only the magnitude of a physiologic response, but also its location (see Kelly and Garavan for a review, 2005). Thus, the circuitry involved in learning a task may be different than the circuitry involved in implementing a task after it is well learned. By constraining areas to those activated in fMRI during initial working memory engagement, it is possible that other critical areas were not monitored for binding potential changes.

References:

Draganski B, May A. Training-induced structural changes in the adult human brain. Behav Brain Res. 2008 Sep 1;192(1):137-42. Abstract

Kelly AM, Garavan H. Human functional neuroimaging of brain changes associated with practice. Cereb Cortex. 2005 Aug 1;15(8):1089-102. Abstract

Koepp MJ, Gunn RN, Lawrence AD, Cunningham VJ, Dagher A, Jones T, Brooks DJ, Bench CJ, Grasby PM. Evidence for striatal dopamine release during a video game. Nature. 1998 May 21;393(6682):266-8. Abstract

View all comments by Terry Goldberg

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Robert Bilder, SRF Advisor (Disclosure)
Submitted 27 April 2010
Posted 27 April 2010

It’s wonderful to see this study in Nature, for it draws international attention to extremely important issues, including the degree to which cognitive training may yield generalizable effects, and to the amazing potential power of Web-based technologies to engage tens of thousands of individuals in behavioral research. It seems likely—and unfortunate—that for much of the world, the “take-home” message will be that all this “brain training” is bunk.

For me, the most exciting aspect of the study is that it was done at all. The basic design (engaging a TV audience to register for online experiments) is ingenious and indicates the awesome potential to use media for “good instead of evil.” Are there any investigators out there who would not be happy to recruit 52,617 research participants (presumably within the course of a single TV season)? Of course, this approach yielded only 11,430 people who completed the protocol (still sounds pretty good to me, especially since this reflects roughly 279,692 sessions completed). For those of us who struggle for years to obtain behavioral test data on several thousand research participants in our laboratories, this is a dream come true. Thus, I see the big contribution here as a validation of high-throughput behavioral phenotyping using the Internet, which will be necessary if we are to realize the promise of neuropsychiatric genetics.

The success of this validation experiment is evident from the specific effects (not the generalization effect), which showed effect sizes (Cohen’s d) ranging from 0.72 to 1.63. It would be of high value to see considerably more data on the within-subject variability of each training test score and on the covariance of different scores across sessions, not to mention other quality metrics, such as response time consistency, which should be used to assure “on-task” behavior and the validity of each session. Despite the lack of these details, the positive findings of these large effects mean the results must be reasonably reliable (i.e., it is not feasible to get large and consistent effects from noise). This is very encouraging for those who want to see more Web-based behavioral phenotyping.
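
For readers unfamiliar with the metric, Cohen’s d expresses an effect as the difference in means divided by the pooled standard deviation. A minimal sketch with made-up scores (not data from the study) shows the computation:

```python
import statistics

def cohens_d(pre, post):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    s1, s2 = statistics.stdev(pre), statistics.stdev(post)
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled

# Illustrative pre- and post-training scores only
d = cohens_d([10, 12, 11, 13, 9, 12], [14, 15, 13, 16, 12, 15])
```

By convention, d values around 0.8 or higher are considered large, so the 0.72 to 1.63 range reported for the trained tasks represents substantial practice-specific gains.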

The “negative” results center on the lack of generalization to the “benchmark” tests, and this aspect of the outcome involves many more devils in the details. The authors are sensitive to many possibilities. The argument that there might be a misalignment of training with the benchmark tests is difficult to refute. The authors suggest that their 12 training tasks covered a “broad range of cognitive functions” and “are known to correlate highly with measures of general fluid intelligence or ‘g,’ and were therefore most likely to produce an improvement in the general level of cognitive functioning” [italics added]. But this assertion is not logically sound. It is not unreasonable, but there is no necessary reason to suppose that the best way to improve general cognitive ability is through practice on tasks of general intellectual ability.

The suitability of the CANTAB (Cambridge Neuropsychological Test Automated Battery) tasks as benchmarks is also not an open-and-shut case. Yes, they are sensitive to brain damage and to the disruptive effects of various agents (for a comprehensive bibliography, see www.cantab.com/science/bibliography.asp). The data showing that these tasks can detect improvement in healthy people are more limited. One wonders whether the guanfacine and clonidine improvement effects observed in the one study that is cited (Jakala et al., 1999a) would be seen in the sample of people who participated in the new study. Incidentally, it could be noted that the same authors also reported impairments on other cognitive tasks using the same agents (Jakala et al., 1999b; Jakala et al., 1999c). The bottom line is that it may be difficult to see big improvements, particularly in general cognitive ability, in people who are to some extent preselected for their healthy cognitive function and interest in “exercising” their brains.

Overall, I believe more research is needed to determine which aspects of cognitive function may be most trainable, in whom, and under what circumstances. I worry that this publication may end up derailing important studies that can shed light on these issues, since it is much easier to conclude that something is “bunk” than to push the envelope and systematically study all the reasons such a study could generate negative generalization results. Let us hope the baby is not thrown out with the bathwater at this early stage of investigation.

References:

Jäkälä P, Sirviö J, Riekkinen M, Koivisto E, Kejonen K, Vanhanen M, Riekkinen P Jr. Guanfacine and clonidine, alpha 2-agonists, improve paired associates learning, but not delayed matching to sample, in humans. Neuropsychopharmacology. 1999a Feb;20(2):119-30. Abstract

Jäkälä P, Riekkinen M, Sirviö J, Koivisto E, Kejonen K, Vanhanen M, Riekkinen P Jr. Guanfacine, but not clonidine, improves planning and working memory performance in humans. Neuropsychopharmacology. 1999b May;20(5):460-70. Abstract

Jäkälä P, Riekkinen M, Sirviö J, Koivisto E, Riekkinen P Jr. Clonidine, but not guanfacine, impairs choice reaction time performance in young healthy volunteers. Neuropsychopharmacology. 1999c Oct;21(4):495-502. Abstract

View all comments by Robert Bilder

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Philip Harvey
Submitted 27 April 2010
Posted 27 April 2010

The paper from Owen et al. reports that a sample of community dwellers recruited to participate in a cognitive remediation study did not improve their cognitive performance except on the tasks on which they trained. While the results of cognitive remediation studies in schizophrenia have been inconsistent, the results of this study are particularly difficult to interpret, for several reasons:

1. Baseline performance on the "benchmarking" assessment does not appear to be adjusted for age, education, and other demographic predictive factors. As a result, we do not know if the participants even had room to improve from baseline. It is possible that the volunteers in this study were very high performers at baseline and could not improve. Furthermore, if they are, in fact, high performers, their performance and the lack of any improvements with treatment may be irrelevant to poor performers.

2. There is no way to know if the research participants who completed the baseline and endpoint assessments were the same ones who completed the training. Without this control, which is provided in studies that directly observe subjects, there may be reason to be suspicious of the results.

3. Although this is a letter to the editor, methodological details are missing. Recent studies that reported successful cognitive remediation interventions have used dynamic titration to adjust task difficulty according to performance at the time of the training. While Owen et al. do not say, it seems unlikely that their study used dynamic titration. The use of this technique is the major difference between older, unsuccessful cognitive remediation interventions and recent, more successful ones delivered to people with schizophrenia (see McGurk et al., 2007 for a discussion).

4. Even more important is the small effect of the training and participation in general on changes in elements of the benchmarking assessment. These changes on half of the tests administered are smaller than those reported in simple retest assessments without cognitive training in people with schizophrenia (see Goldberg et al., 2007). Although the authors argue that these tests are known to be sensitive, this very small effect is particularly salient for paired associates learning. Thus, the issue of whether some of these tests are not sensitive to changes originating from either treatment or practice requires some consideration, particularly since we do not know how well the participants performed at baseline.

5. Most important, these data are likely to reflect substantial demand characteristics. A study put on by a show called Bang Goes The Theory certainly appears to pull for disconfirmation. It is possible that demand characteristics account for more variance than training since even successful training effects can be small. We know that environmental factors such as disability compensation account for more variance in real-world outcomes in schizophrenia than ability (Rosenheck et al., 2006); it would be no surprise if demand characteristics account for more variance than ability as well.

Thus, while these results generate the reasonable suggestion that participation through the Internet in cognitive remediation does not guarantee improved cognitive performance, the current research design does not address many important issues regarding cognitive enhancement in clinical populations.

References:

Goldberg TE, Goldman RS, Burdick KE, Malhotra AK, Lencz T, Patel RC, Woerner MG, Schooler NR, Kane JM, Robinson DG. Cognitive improvement after treatment with second-generation antipsychotic medications in first-episode schizophrenia: Is it a practice effect? Arch Gen Psychiatry. 2007 Oct;64:1115-22. Abstract

McGurk SR, Twamley EW, Sitzer DI, McHugo GJ, Mueser KT. A meta-analysis of cognitive remediation in schizophrenia. Am J Psychiatry. 2007;164:1791-1802. Abstract

Rosenheck R, Leslie D, Keefe R, McEvoy J, Swartz M, Perkins D, Stroup S, Hsiao JK, Lieberman J, CATIE Study Investigators Group. Barriers to employment for people with schizophrenia. Am J Psychiatry. 2006;163:411-7. Abstract

View all comments by Philip Harvey

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Terry Goldberg
Submitted 7 May 2010
Posted 7 May 2010

This important paper by Owen and colleagues reads like a cautionary tale. In a Web-based study of over 11,000 presumptively healthy individuals, neither of two different types of cognitive training resulted in transfer of improvement to a reasoning task or to several well-validated cognitive tasks from the Cambridge Neuropsychological Test Automated Battery (CANTAB). I would like to point out three issues with the study.

First, the amount of training that individuals received at their own behest differed greatly. While the authors found no correlation between the number of training sessions and performance improvement or lack thereof, it is nevertheless possible that there is some critical threshold, either in number of sessions or, more importantly, time spent in sessions (not noted in the paper), that must be reached before transfer can occur. In other words, the relationship between training and transfer may be nonlinear and perhaps sigmoidal.
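
The hypothesized shape of such a dose-response can be illustrated with a logistic curve; the threshold, slope, and maximum-gain values below are arbitrary, chosen only to show the sigmoidal form, not estimates from any of the studies discussed:

```python
import math

def transfer_gain(sessions, threshold=20.0, slope=0.3, max_gain=1.0):
    """Hypothetical sigmoidal dose-response: negligible transfer below a
    critical number of training sessions, saturating well above it."""
    return max_gain / (1.0 + math.exp(-slope * (sessions - threshold)))

# Below the threshold the predicted gain is near zero; far above it,
# the gain approaches the maximum.
low, mid, high = transfer_gain(5), transfer_gain(20), transfer_gain(40)
```

Under such a curve, a study in which most participants train below the threshold would find little transfer even if transfer were achievable with more training.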

Second, it is possible that scores on some of the benchmark/transfer tasks were close to ceiling in this normal population, preventing gain. Perhaps more likely, they could have been close to floor (see Figure 1 in the paper; scores were seemingly quite low), making them insensitive to gain.

Last, as pointed out by Phil Harvey, the nature of the recruitment tool, a debunking TV show called Bang Goes the Theory, may have introduced a bias to disconfirm in the participants. This would be especially pertinent if participants understood the design of the study, which seems likely.

View all comments by Terry Goldberg

Related News: Brain Training Falls Short in Big Online Experiment

Comment by:  Angus MacDonald, SRF Advisor
Submitted 11 May 2010
Posted 11 May 2010

Owen and colleagues are to be commended for drawing attention to the great constraint of cognitive training—that is, the potential for improvements on only the restricted set of abilities that were trained.

This has been the bugbear of cognitive training for a long time. Short story with a purpose: In 2001, when I raved about the remarkable results of Klingberg (later published as Olesen et al., 2004) to John Anderson, an esteemed cognitive psychologist at Carnegie Mellon University, he scoffed at the possibility that Klingberg's training might have led to improvements on Raven's Matrices, a measure of generalized intelligence. "People have been looking into this for a century. If working memory training improved intelligence, schools would be filled with memory training courses rather than math and language courses," he said (or something to that effect). This issue of training and generalization is not new, and the results of Owen and colleagues are consistent with a large body of twentieth-century research.

Owen, therefore, reminds us of an important issue in the current generation of excitement about neuroplasticity: behavioral effects are likely to be small for distal generalization. The possibility of striking results is likely going to require something well beyond what is encountered in everyday or casual experience.

One way to improve on casual experience is dynamic titration. It is reasonably well established that when faced with a task of fixed difficulty, people begin to asymptote on accuracy and then simply get faster and faster, with no hope of generalization. The online methods are mum about how this concern was addressed. (One certainly hopes that it was.)
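
Dynamic titration is typically implemented as a staircase: difficulty rises after a run of correct responses and falls after errors, holding performance near a target accuracy. A minimal sketch follows; the 3-up/1-down rule and level bounds are illustrative assumptions, not the procedure used in any study discussed here:

```python
class Staircase:
    """3-up/1-down difficulty titration: raise the difficulty level (e.g., the
    n in an n-back task) after three consecutive correct trials, and lower it
    after any error, clamped to [min_level, max_level]."""

    def __init__(self, level=1, min_level=1, max_level=9):
        self.level = level
        self.min_level = min_level
        self.max_level = max_level
        self.streak = 0  # consecutive correct trials at the current level

    def update(self, correct):
        if correct:
            self.streak += 1
            if self.streak == 3:  # three in a row: make the task harder
                self.level = min(self.level + 1, self.max_level)
                self.streak = 0
        else:  # any error: make the task easier
            self.level = max(self.level - 1, self.min_level)
            self.streak = 0
        return self.level
```

Because the rule pushes difficulty up whenever performance is good and down whenever it falters, participants are kept working at the edge of their ability rather than coasting on an overlearned fixed level.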

We have recently demonstrated that dynamic titration on an n-back task, in the context of a broader working memory training, can provide local generalization (Haut et al., 2010). In that study, we examined changes in prefrontal cortical activity with cognitive training compared to an active placebo control in schizophrenia patients. We found that training provided a stimulus-general improvement in the trained task, and that this improvement mapped onto greater frontopolar and dorsolateral prefrontal cortex activity. This result was therefore quite similar to that reported in healthy adults by Olesen (Olesen et al., 2004). I don't think I'll press for working memory training courses in my local school yet, but Owen won't be the reason.

References:

Olesen PJ, Westerberg H, Klingberg T. Increased prefrontal and parietal activity after training of working memory. Nat Neurosci. 2004;7(1):75-9. Epub 2003 Dec 14. Abstract

Haut KM, Lim KO, MacDonald AW III. Prefrontal cortical changes following cognitive training in patients with chronic schizophrenia: effects of practice, generalization and specificity. Neuropsychopharmacology. 2010 Apr 28. Abstract

View all comments by Angus MacDonald