14 May 2012. Successful social interactions contribute to good outcomes in schizophrenia, but impairments in the interpretation of others’ emotions, whether conveyed by facial expressions or speech, place many patients at a disadvantage. A new study of spoken language that focused on pitch and intensity (loudness)—important nonverbal conveyors of emotion—concludes that deficits in auditory emotion recognition seen in schizophrenia are primarily due to dysfunctions in the perception of pitch rather than intensity.
“It’s important from a research point of view: in order to pinpoint what needs to be fixed, we need to understand the underlying mechanisms,” says study author Daniel C. Javitt of the Nathan Kline Institute for Psychiatric Research in Orangeburg, New York. “But it’s probably even more important from a family and caregiver point of view—just realizing that patients have these deficits, and that when they’re not responding appropriately to your tone of voice it’s not because they’re not trying, but because they’re not hearing it.”
Javitt and colleagues have previously reported that deficits in pitch perception play a greater role than perception of intensity in the compromised emotion recognition seen in schizophrenia patients (Leitman et al., 2005), and have also identified structural deficits in the primary auditory cortex in schizophrenia patients (see SRF related news story).
The new research, which appears in the April issue of the American Journal of Psychiatry, aimed to replicate the pitch versus intensity findings in a larger group of patients, and also to validate a simplified, briefer version of the auditory battery that Javitt and colleagues had employed in their earlier studies of emotion recognition.
For the primary sample, 92 patients and 73 comparison subjects listened to a battery of audio recordings of British actors speaking short phrases whose semantic content is emotionally neutral (e.g., “It’s 10 o’clock.”), but which were invested with emotion through changes in either pitch or intensity (the auditory emotion recognition test, or AER). The “full” version of the battery consisted of 88 such utterances conveying anger, disgust, fear, happiness, sadness, or “no emotion”; a briefer version included just 32 stimuli and eliminated utterances signifying disgust. The majority of the sample was tested using both versions of the battery, but a small number of subjects (15 patients and 12 comparison subjects) heard one or the other.
Subjects in the primary sample also completed a tone-matching task, and were assessed for visual detection of facially expressed emotion (the Penn Emotion Recognition Task [ER-40]) and for global cognitive functioning (the WAIS-III Processing Speed Index [PSI]). A replication sample included a smaller number of patients (N = 32) along with 188 comparison subjects.
The authors note that both anger and happiness can be conveyed by either pitch or loudness, with increased intensity used to communicate “hot anger” and, in the case of happiness, elation. Accordingly, their analysis of subjects’ recognition of these two emotions treated each of these acoustic features separately.
As in previous studies by Javitt and colleagues, patients showed impairment on each of the emotions compared to the comparison group, and again there was a highly significant (p < 0.0001) difference in the performance of the two groups when the results were analyzed in terms of pitch versus loudness: patients recognized emotions conveyed by intensity on par with the comparison subjects, but fell far short when stimuli conveyed pitch-based “cold anger” or happiness. The researchers observed no difference in these results whether the full or brief battery was used, including in patients who were tested on both forms of the battery.
These results were paralleled on tone-matching tests, on which patients performed more poorly overall. And there was a significant correlation between the patients’ tone-matching performance and their ability to recognize pitch-based expressions of emotion. This correlation remained significant even when results from the PSI were taken into account.
On the ER-40 test of visual emotion recognition, patients performed significantly below comparison subjects in recognizing sadness, fear, and no emotion, with deficits in the detection of happiness and anger approaching significance as well. This pattern contrasts with the AER, on which significant differences between patients and comparison subjects held across all emotions tested. Moreover, when the ER-40 and AER results were analyzed together, there were significant correlations between performance on the visual and auditory stimuli for individual emotions, “suggesting some shared emotional processing disturbance in addition to contributions of specific sensory deficits,” write the authors.
In a path analysis of all three measures, performance on the tone-matching and PSI tests was most strongly correlated with, and contributed about equally to, the deficits in auditory emotion recognition revealed by the AER.
Can auditory deficits be treated?
In a recent interview with SRF, Javitt stressed that his continued research on basic sensory deficits in schizophrenia does not imply that disturbances in “top-down” functions such as social cognition are unimportant. “It’s a whole-brain argument,” he says. However, he does believe that bottom-up components such as pitch perception are more important than is generally recognized, contributing “about 50 percent” to the schizophrenia phenotype. As for the clinical relevance of this viewpoint, he says, “Unless you remediate these deficits, it would be very hard to teach patients emotional recognition. They just can’t hear the underlying transitions.”
However, remediation is no small matter, Javitt says, because of the absence of plasticity in the adult auditory cortex. Even if NMDA-based drugs could somehow introduce limited plasticity in this region, he says, auditory training is a zero-sum game that requires careful focus. “If you start training people on one set of tones, they get better at that set, but they get worse at others—it’s a competition,” Javitt explains. “So you put a lot of work into finding the frequencies that are needed for things like voice emotion recognition.”
But Javitt is encouraged by recent work from Sophia Vinogradov’s group at UCSF showing that training can improve not only auditory perception in schizophrenia patients, but also top-down factors such as verbal memory (see, e.g., Adcock et al., 2009; see also SRF Webinar).
Javitt says that the auditory perceptual deficits seen in schizophrenia are similar to those that have been identified in dyslexia, a commonality that may shed light on the poor reading skills of many schizophrenia patients, a number of whom read at or above their grade level before the illness emerges. He also hopes that tests such as his group’s new simplified battery will prove useful in determining the extent of auditory deficits in the prodrome and as a predictive tool.—Pete Farley.
Gold R, Butler P, Revheim N, Leitman DI, Hansen JA, Gur RC, Kantrowitz JT, Laukka P, Juslin PN, Silipo GS, Javitt DC. Auditory emotion recognition impairments in schizophrenia: relationship to acoustic features and cognition. Am J Psychiatry. 2012 Apr 1;169(4):424-32. Abstract