Schizophrenia Research Forum - A Catalyst for Creative Thinking

SfN 2010—The Neural Circuitry of Value and Salience

The Society for Neuroscience hosted more than 30,000 researchers at Neuroscience 2010 in San Diego, 13-17 November 2010. Here, we are fortunate to receive a meeting missive from Lu Jin, a graduate student at Yale University, New Haven, Connecticut.


6 December 2010. It has long been known that midbrain dopamine neurons encode a “prediction error,” a signal closely tied to motivation and decision-making. Many brain regions project to the areas containing dopamine neurons, but which of them are critical for providing dopamine neurons with reward or motivation value signals? More generally, where and how is motivation generated and processed, and how does it affect action? These were the questions posed by Okihide Hikosaka of the U.S. National Eye Institute, Bethesda, Maryland, in his November 14 SfN Presidential Special Lecture, "From Motivation to Action: Neuronal Circuits for Value, Salience, and Information," in which he brought the audience up to date on his lab's latest work toward answering them.

Hikosaka first sketched how motivation and action are interconnected. Motivation predicts an outcome and stimulates the generation of an action; the action produces an outcome, which feeds back to influence motivation and the next action. Neuronal coding of predicted and received reward value is therefore crucial for the motivation network. This information is represented by many subcortical areas and neuron types, which together constitute a complex network.

In 1998, W. Schultz and colleagues reported that dopamine neurons in the ventral tegmental area (VTA) encode a “reward-prediction error”: they fire more actively when the reward value is higher than expected and reduce their firing when it is lower than expected (Schultz, 1998). Later studies revealed that this value has several components, for example, the magnitude and delay of reward. The Hikosaka lab has reported a new feature of this coding: midbrain dopamine neurons also signal the motivation to obtain advance information about an upcoming reward (Bromberg-Martin and Hikosaka, 2009). Such advance information tells the animal whether or not it will get a reward later, which is more useful than uninformative, random information. These neurons are activated by cues predicting advance information about the upcoming reward and inhibited by cues predicting random information.
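To make the “reward-prediction error” idea concrete, here is a minimal, illustrative Python sketch (my own simplification, not a model from the studies described here): the error signal is positive when the outcome is better than predicted, negative when it is worse, and the prediction is then nudged toward the outcome.

def prediction_error(reward, predicted_value):
    # Dopamine-like teaching signal: actual reward minus predicted reward.
    return reward - predicted_value

def update_prediction(predicted_value, error, learning_rate=0.1):
    # Move the prediction a small step toward the observed outcome.
    return predicted_value + learning_rate * error

# Example: a cue initially predicts 0.5 units of juice.
value = 0.5
for reward in [1.0, 1.0, 0.0]:  # better, better, then worse than expected
    error = prediction_error(reward, value)
    value = update_prediction(value, error)
    print(f"reward={reward:.1f}  error={error:+.2f}  new prediction={value:.2f}")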

Hikosaka's lab has identified another area that plays a critical role in reward processing: the lateral habenula (LH), which projects to the VTA. The LH is known to be involved in many emotional and cognitive functions, including the stress response, learning, and attention. Hikosaka and colleagues found that LH neurons in the monkey respond to reward and reward-predicting cues (Matsumoto and Hikosaka, 2008). Interestingly, they are excited by non-reward-predicting cues and inhibited by reward-predicting cues, the opposite of dopamine neurons. Temporally, the excitation of LH neurons started earlier than the inhibition of dopamine neurons, and electrical stimulation of the LH inhibited dopamine neurons. These observations suggest that LH neurons inhibit dopamine neurons, thereby providing negative reward-related signals.

A recent finding from Hikosaka's lab shows that, similar to LH neurons, neurons in the internal segment of the globus pallidus (GPi) also encode reward-related signals (Hong and Hikosaka, 2008). GPi neurons project to the LH, suggesting that the GPi may initiate reward-related signals by activating the LH, which in turn influences dopaminergic neurons in the midbrain.
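The sign relationships in this GPi-LH-dopamine chain can be summarized in a toy Python sketch (again only an illustration of the description above, not the authors' circuit model): LH activity carries a sign-inverted value signal, and because the LH inhibits dopamine neurons, the dopamine response ends up tracking value with the original sign.

def lh_response(cue_value):
    # LH neurons fire more for cues predicting no reward and less for
    # reward-predicting cues, i.e., opposite in sign to value.
    return -cue_value

def dopamine_response(lh_activity):
    # LH activity inhibits dopamine neurons, flipping the sign back,
    # so dopamine activity again tracks value.
    return -lh_activity

for cue, value in [("reward-predicting cue", 1.0), ("no-reward cue", -1.0)]:
    lh = lh_response(value)
    da = dopamine_response(lh)
    print(f"{cue}: LH={lh:+.1f}, dopamine={da:+.1f}")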

If midbrain dopamine neurons encoded only value information, aversive stimuli (punishment) should reduce their activity because of the stimuli's negative motivational value. However, previous studies have yielded conflicting results, some showing inhibition and others showing both inhibition and excitation in response to aversive stimuli. Hikosaka's lab addressed this question by recording the activity of the same dopamine neurons in response to both reward and punishment. One of their recent studies shows that the increased response to higher reward value holds only for a subset of dopamine neurons (Matsumoto and Hikosaka, 2009). When they recorded dopamine neuron activity in monkeys during a task with rewarding and aversive outcomes (juice and air puffs directed at the face, respectively), they found that some dopamine neurons were excited by reward-predicting cues and inhibited by air puff-predicting cues, as the classic view predicts. However, a greater proportion of dopamine neurons increased their activity in response to both types of cue.
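A toy sketch of the two response profiles (an illustration under my own simplified assumptions, not the published analysis): a "signed" neuron tracks motivational value, positive for reward-predicting cues and negative for punishment-predicting cues, whereas an "unsigned" neuron is excited by both, as if responding to the absolute motivational impact of the cue.

def signed_value_response(motivational_value):
    # Excited by reward-predicting cues, inhibited by punishment-predicting cues.
    return motivational_value

def unsigned_response(motivational_value):
    # Excited by both reward- and punishment-predicting cues.
    return abs(motivational_value)

for cue, value in [("juice-predicting cue", 1.0), ("air-puff-predicting cue", -1.0)]:
    print(f"{cue}: signed={signed_value_response(value):+.1f}, "
          f"unsigned={unsigned_response(value):+.1f}")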

Hikosaka explained this inconsistency with a “salience hypothesis” that complements the “value hypothesis.” He suggested that one subset of dopamine neurons and its related circuits processes salience information; these neurons are excited similarly by reward and punishment. The other subset processes value information and is excited by reward but inhibited by punishment. Interestingly, “salience neurons” excited by punishment were located more dorsolaterally in the substantia nigra pars compacta (SNc), whereas “value neurons” inhibited by punishment were located more ventromedially, some in the VTA. Hikosaka suggested that the salience circuitry is more involved in searching, whereas the value circuitry is more involved in learning.—Lu Jin.