|2012 and in press|
People can switch quickly and flexibly from one task to another, but suffer the effects of between-task competition when they do so: After switching, they tend to be distracted by irrelevant stimulus information and hampered by incorrect actions associated with recently-performed tasks. This competition results in performance costs of switching, as well as a bias against switching when there is choice over which task to perform, particularly when switching from a difficult task to an easier one. Two experiments investigated the locus of these between-task competition effects in voluntary task switching. Participants switched between an easy location classification and a harder shape classification, making two responses on each trial: the first to register their task choice, the second to perform the chosen task on a subsequently presented stimulus. The results indicated that participants chose to perform the difficult shape task more often than the easier location task, evidence that between-task competition affects intentions that are expressed independently of task-specific actions. The bias was stronger in participants with faster choice speed, suggesting that these influences are relatively automatic. Moreover, even though participants had unlimited time to choose and prepare a task before stimulus presentation, their subsequent performance was nonetheless sensitive to persisting effects of between-task competition. Altogether these results indicate the pervasive influence of between-task competition, which affects both the expression of global task intentions and the production of task-specific actions.
This EEG study investigated the impact of between-task competition on intentional control in voluntary task switching. Anticipatory preparation for an upcoming task switch is a hallmark of top-down intentional control. Meanwhile, asymmetries in performance and voluntary choice when switching between tasks differing in relative strength reveal effects of between-task competition, reflected in a surprising bias against switching to an easier task. Here we assessed the impact of this bias on EEG markers of intentional control during preparation for an upcoming task switch. The results revealed strong and varied effects of between-task competition on EEG markers of global task preparation—a frontal contingent negative variation (CNV), a posterior slow positive wave, and oscillatory activity in the alpha band (8–12 Hz) over posterior scalp sites. In contrast, we observed no between-task differences in motor-specific task preparation, as indexed by the lateralized readiness potential (LRP) and by motor-related amplitude asymmetries (MRAA) in the mu (9–13 Hz) and beta (18–26 Hz) frequency bands. Collectively these findings demonstrate that between-task competition directly influences the formation of top-down intentions, not only their expression in overt behaviour. Specifically, this influence occurs at the level of global task intention rather than the preparation of specific actions.
A large corpus of data has demonstrated the sensitivity of behavioral and neural measures to variation in the availability of reward. The present study aimed to extend this work by exploring reward motivation in a rapid serial visual presentation (RSVP) task using complex satellite imagery. We found that reward motivation significantly influenced neural activity both in the preparatory period and in response to target images. Pre-stimulus alpha activity and, to a lesser degree, P3 and CNV amplitude were found to be significantly predictive of reward condition on single trials. Target-locked P3 amplitude was modulated both by reward condition and by variation in target-detectability inherent to our task. We further quantified this exogenous influence, showing that P3 differences reflected single-trial variation in P3 amplitude for different targets. These findings provide theoretical insight into the neural indices of reward in an RSVP task, and have important applications in the field of satellite imagery analysis.
Errors in choice tasks have been shown to elicit a cascade of characteristic components in the human event-related potential – the error-related negativity (Ne/ERN) and the error positivity (Pe). Despite the large number of studies concerned with these components, it is still unclear how they relate to error awareness as measured by overt error signaling responses. In the present study, we considered error awareness as a decision process in which evidence for an error is accumulated until a decision criterion is reached, and hypothesized that the Pe is a correlate of the accumulated decision evidence. To test the prediction that the amplitude of the Pe varies as a function of the strength and latency of the accumulated evidence for an error, we manipulated the speed-accuracy trade-off in a brightness discrimination task while participants signaled the occurrence of errors. Based on a previous modeling study, we predicted that lower speed pressure should be associated with weaker evidence for an error and, thus, with smaller Pe amplitudes. As predicted, average Pe amplitude was decreased and error signaling was impaired in a low speed pressure condition compared to a high speed pressure condition. In further analyses, we derived single-trial Pe amplitudes using a logistic regression approach. Single-trial amplitudes robustly predicted the occurrence of signaling responses on a trial-by-trial basis. These results confirm the predictions of the evidence accumulation account, supporting the notion that the Pe reflects accumulated evidence for an error and that this evidence drives the emergence of error awareness.
Cognitive control and memory are fundamentally intertwined, but interactions between the two functions have only recently received sustained research interest. Here we introduce a novel paradigm to investigate how control influences memory encoding and, conversely, how memory measures can provide new insight into flexible cognitive control. Participants switched between object and word classifications, then were tested for their recognition memory of items presented during switching. Switching impaired memory for task-relevant information but actually improved memory for task-irrelevant information, indicating that control demands reduced the selectivity of memory encoding rather than causing a general memory decline. Recognition memory strength provided a robust trial-by-trial measure of the effectiveness of cognitive control that “predicted” earlier task-switching performance and revealed a substantial, but switch-specific, influence of bottom-up factors on between-task competition. Collectively, our findings illustrate how cognitive control and bottom-up factors interact to have simultaneous influence on both current performance and future memory.
Impairments in cognitive control that generate deviant adaptive cognition have been proposed to account for the strong preference for repetitive behavior in autism. We examined whether this preference reflects intentional deficits rather than problems in task execution in the broader autism phenotype, using the Autism-Spectrum Quotient (AQ). Participants chose between two tasks differing in their relative strength by indicating first their voluntary task choice and then responding to the subsequently presented stimulus. We observed a stronger repetition bias for the harder task in high-AQ participants, with no other differences between the two groups. These findings indicate that interference between competing tasks contributes significantly to repetitive behavior in autism by modulating the formation of task intentions when choosing tasks voluntarily.
Intense research interest over the past decade has yielded diverse and often discrepant theories about the function of anterior cingulate cortex (ACC). In particular, a dichotomy has emerged between neuropsychological theories suggesting a primary role for ACC in motivating or “energizing” behavior, and neuroimaging-inspired theories emphasizing its contribution to cognitive control and reinforcement learning. To reconcile these views, we propose that ACC supports the selection and maintenance of “options” – extended, context-specific sequences of behavior directed toward particular goals – that are learned through a process of hierarchical reinforcement learning. This theory accounts for ACC activity in relation to learning and control while simultaneously explaining the effects of ACC damage as disrupting the motivational context supporting the production of goal-directed action sequences.
People are capable of robust evaluations of their decisions: they are often aware of their mistakes even without explicit feedback, and report levels of confidence in their decisions that correlate with objective performance. These metacognitive abilities help people to avoid making the same mistakes twice, and to avoid overcommitting time or resources to decisions that are based on unreliable evidence. In this review, we consider progress in characterising the neural and mechanistic basis of these related aspects of metacognition—confidence judgments and error monitoring—and identify crucial points of convergence between methods and theories in the two fields. This convergence suggests that common principles govern metacognitive judgments of confidence and accuracy, in particular a shared reliance on post-decisional processing within the systems responsible for the initial decision. However, research in both fields has focused rather narrowly on simple, discrete decisions—reflecting the correspondingly restricted focus of current models of the decision process itself—raising doubts about the degree to which discovered principles will scale up to explain metacognitive evaluation of real-world decisions and actions that are fluid, temporally extended, and embedded in the broader context of evolving behavioural goals.
Neural mechanisms of cognitive control are hypothesized to support flexible, goal-directed behavior by representing task-relevant information in order to guide thought and action. The conflict monitoring theory proposes that anterior cingulate cortex (ACC) contributes to cognitive control by detecting conflicts in information processing and signaling when increased top-down control is required. This theory provides a computationally specified framework for understanding how cognitive control is recruited, and explains a large literature of human neuroimaging studies showing ACC activity in conditions of increased cognitive demand. Predictions from the theory have been tested and consistently confirmed in behavioral and neuroimaging experiments with human subjects. However, challenging findings from patients with ACC lesions, and from studies of ACC function in non-human primates, suggest that conflict monitoring may be just one facet of the broader role of ACC in performance monitoring and the optimization of behavior.
|2011|
Parieto-occipital EEG alpha power and subjective reports of attentional state are both associated with visual attention and awareness, but little is currently known about the relationship between these two measures. Here, we bring together these two literatures to explore the relationship between alpha activity and participants’ introspective judgements of attentional state as each varied from trial to trial during performance of a visual detection task. We collected participants’ subjective ratings of perceptual decision confidence and attentional state on continuous scales on each trial of a rapid serial visual presentation (RSVP) detection task while recording EEG. We found that confidence and attentional state ratings were largely uncorrelated with each other, but both were strongly associated with task performance and post-stimulus decision-related EEG activity. Crucially, attentional state ratings were also negatively associated with prestimulus EEG alpha power. Attesting to the robustness of this association, we were able to classify attentional state ratings via prestimulus alpha power on a single-trial basis. Moreover, when we repeated these analyses after smoothing the time series of attentional state ratings and alpha power with increasingly large sliding windows, both the correlations and classification performance improved considerably, with the peaks occurring at a sliding window size of approximately seven minutes worth of trials. Our results therefore suggest that slow fluctuations in attentional state in the order of minutes are reflected in spontaneous alpha power. Since these subjective attentional state ratings were associated with objective measures of both behaviour and neural activity, we suggest that they provide a simple and effective estimate of task engagement that could prove useful in operational settings that require human operators to maintain a sustained focus of visual attention.
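The sliding-window analysis described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes only two per-trial arrays (attentional-state ratings and prestimulus alpha power, here synthetic data with a shared slow fluctuation), smooths each with a centered moving average, and correlates the smoothed series. Increasing the window size should strengthen the correlation if the shared variance is slow.

```python
import numpy as np

def sliding_window_correlation(ratings, alpha_power, window):
    """Correlate two per-trial series after smoothing each with a
    moving average of the given window size (window=1 = raw trials)."""
    kernel = np.ones(window) / window
    smooth_r = np.convolve(ratings, kernel, mode="valid")
    smooth_a = np.convolve(alpha_power, kernel, mode="valid")
    return np.corrcoef(smooth_r, smooth_a)[0, 1]

# Illustrative synthetic data: a slow shared fluctuation plus trial noise
rng = np.random.default_rng(0)
n_trials = 400
slow_drift = np.sin(np.linspace(0, 6 * np.pi, n_trials))
ratings = slow_drift + rng.normal(0, 1.0, n_trials)
alpha = -slow_drift + rng.normal(0, 1.0, n_trials)  # negative association

for w in (1, 10, 50):
    print(f"window={w:3d}  r={sliding_window_correlation(ratings, alpha, w):+.2f}")
```

With these synthetic data the raw trial-by-trial correlation is weakly negative and grows markedly more negative as the window widens, mirroring the pattern reported above.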
Grinband et al. (2011) compare evidence that they have collected from a neuroimaging study of the Stroop task with a simulation model of performance and conflict in that task, and interpret the results as providing evidence against the theory that activity in dorsal medial frontal cortex (dMFC) reflects monitoring for conflict. Here, we discuss several errors in their methods and conclusions and show, contrary to their claims, that their findings are entirely consistent with previously published predictions of the conflict monitoring theory. Specifically, we point out that their argument rests on the assumption that conflict must be greater on all incongruent trials than on all congruent trials—an assumption that is theoretically and demonstrably incorrect. We also point out that their simulations are flawed and diverge substantially from previously published implementations of the conflict monitoring theory. When simulated appropriately, the conflict monitoring theory predicts precisely the patterns of results that Grinband et al. take to present serious challenges to the theory. Finally, we note that their proposal that dMFC activity reflects time on task is theoretically weak, pointing to a direct relationship between behavior (RT) and neural activity but failing to identify any intervening psychological construct to relate the two. The conflict monitoring theory provides such a construct, and a mechanistic implementation that continues to receive strong support from the neuroimaging literature, including the results reported by Grinband et al.
The present study investigated adjustments of selective attention following errors and their relation to the error-related negativity (Ne/ERN), a correlate of errors in event-related potentials. We hypothesized that, if post-error adjustments reflect an adaptive mechanism that should prevent the occurrence of further errors, then adjustments of attentional selectivity should be observed only following errors due to insufficient selective attention. To test this, a four-choice flanker task was used in which errors due to insufficient selective attention (flanker errors) and other errors (nonflanker errors) could be distinguished. We found strong adjustments of selective attention following flanker errors but not following nonflanker errors. Moreover, the Ne/ERN amplitude was correlated with adjustments of selective attention on a trial-by-trial basis. The results provide support for the notion that the Ne/ERN is a correlate of adaptive adjustments following errors.
Errors in speeded decision tasks are associated with characteristic patterns of brain activity. In the scalp-recorded EEG, error processing is reflected in two components, the error-related negativity (ERN) and the error positivity (Pe). These components have been widely studied, but debate remains regarding the precise aspects of error processing they reflect. The present study investigated the relation between the ERN and Pe using a novel version of the flanker task to allow a comparison between errors reflecting different causes—response conflict versus stimulus masking. The conflict and mask conditions were matched for overall behavioural performance but differed in underlying response dynamics, as indexed by response time distributions and measures of lateralised motor activity. ERN amplitude varied in relation to these differing response dynamics, being significantly larger in the conflict condition compared to the mask condition. Furthermore, differences in response dynamics between participants were predictive of modulations in ERN amplitude. In contrast, Pe activity varied little between conditions, but varied across trials in relation to participants’ awareness of their errors. Taken together, these findings suggest a dissociation between the ERN and Pe, with the former reflecting the dynamics of response selection and conflict, and the latter reflecting conscious recognition of an error.
An integrative theory of anterior cingulate cortex function: option selection in hierarchical reinforcement learning
In this chapter we propose a new account of ACC function that integrates several salient features of existing theories while aiming to reconcile their inconsistencies. Our account focuses specifically on the dorsal region of ACC believed to be involved in cognitive control, rather than on the rostral-ventral subdivision that is more involved in emotional processing. We propose that dorsal ACC supports the selection and execution of coherent behaviors over extended periods, an idea we formalize in terms of recent advances in the theory of reinforcement learning that utilize a hierarchical mechanism for action selection to choose between options—sequences of primitive actions associated with particular goals.
|2010|
The ability to detect and compensate for errors is crucial in producing effective, goal-directed behavior. Human error processing is reflected in two event-related brain potential components, the error-related negativity (Ne/ERN) and error positivity (Pe), but the functional significance of both components remains unclear. Our approach was to consider error detection as a decision process involving an evaluation of available evidence that an error has occurred against an internal criterion. This framework distinguishes two fundamental stages of error detection—accumulating evidence (input), and reaching a decision (output)—that should be differentially affected by changes in internal criterion. Predictions from this model were tested in a brightness discrimination task that required human participants to signal their errors, with incentives varied to encourage participants to adopt a high or low criterion for signaling their errors. Whereas the Ne/ERN was unaffected by this manipulation, the Pe varied consistently with criterion: A higher criterion was associated with larger Pe amplitude for signaled errors, suggesting that the Pe reflects the strength of accumulated evidence. Across participants, Pe amplitude was predictive of changes in behavioral criterion as estimated through signal detection theory analysis. Within participants, Pe amplitude could be estimated robustly with multivariate machine learning techniques and used to predict error signaling behavior both at the level of error signaling frequencies and at the level of individual signaling responses. These results suggest that the Pe, rather than the Ne/ERN, is closely related to error detection, and specifically reflects the accumulated evidence that an error has been committed.
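The signal detection theory analysis mentioned above rests on standard formulas for criterion and sensitivity. As a hedged illustration (the abstract does not specify the exact pipeline, and the rates below are hypothetical, not from the study), the behavioral criterion can be estimated from error-signaling hit and false-alarm rates:

```python
from statistics import NormalDist

def z(p):
    """Inverse of the standard normal CDF (the z-transform)."""
    return NormalDist().inv_cdf(p)

def sdt_criterion(hit_rate, fa_rate):
    """Response criterion c = -(z(H) + z(F)) / 2; higher c means a
    more conservative criterion (fewer error-signaling responses)."""
    return -(z(hit_rate) + z(fa_rate)) / 2

def sdt_dprime(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(F)."""
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: signaling 80% of true errors (hits) and
# 10% of correct trials (false alarms)
print(round(sdt_criterion(0.80, 0.10), 2))  # -> 0.22
print(round(sdt_dprime(0.80, 0.10), 2))     # -> 2.12
```

Shifting incentives toward a higher (more conservative) criterion lowers both the hit and false-alarm rates while leaving d' unchanged, which is the signature the criterion manipulation above exploits.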
A Response to Schall JD and Emeric EE (2010): Conflict in Cingulate Cortex Function between Humans and Macaque Monkeys: More Apparent than Real. Brain Behav Evol 75:237–238.
It has recently been shown that the cost associated with switching tasks is eliminated following ‘no-go’ trials, in which response selection is not completed, suggesting that the switch cost depends on response selection. However, no-go trials may also affect the switch cost by interfering with the effects of task preparation that precede response selection. To test this hypothesis, we compared switch costs following standard go trials with those following two types of non-response trials: no-go trials, for which a stimulus is presented that indicates no response should be made (Exp 1); and cue-only trials, in which no stimulus is presented following the task cue (Exp 2). We hypothesized that eliminating no-go stimuli would reveal effects of task preparation on the switch cost in cue-only trials. We found no switch cost following no-go trials (Exp 1), but a reliable switch cost in cue-only trials (i.e., when no-go stimuli were removed; Exp 2). We conclude that no-go trials can modulate the switch cost, independent of their effect on response selection, by interfering with task preparation, and that the effects of task preparation on switch cost are more directly assessed by cue-only trials.
Voluntary action can be studied by giving participants free choice over which task to perform in response to each presented stimulus. In such experiments, performance costs are observed when participants choose to switch tasks from the previous trial. It has been proposed that these costs primarily index the time-consuming operation of top-down control processes that support voluntary action. Contrary to this view, the present experiments found that greater costs were associated with voluntary switching to the easier task of a pair. These increased switch costs for the easier task were accompanied by a reliable preference of the participants for performing the other, more difficult task. Interference between tasks during response selection was identified as the critical factor driving these effects of task difficulty. Together the findings suggest that participants’ voluntary choices, and the time taken to execute those choices, may not directly index the operation of cognitive control, but instead reflect complex interactions between top-down and bottom-up influences on behavior.
Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or conflict, between responses in order to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.
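The conflict measure used in such simulations is conventionally computed as Hopfield energy over competing response units (following the conflict-monitoring formulation of Botvinick and colleagues). A minimal sketch, with the unit activations and inhibitory weight below chosen purely for illustration:

```python
def response_conflict(activations, inhibitory_weight=1.0):
    """Conflict as Hopfield energy: the summed pairwise product of the
    activations of mutually inhibiting response units, scaled by the
    strength of the inhibition between them."""
    n = len(activations)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            energy += inhibitory_weight * activations[i] * activations[j]
    return energy

# Fast responses: little activation has accumulated in either response
# unit, so conflict is low (even though errors are more likely)
print(response_conflict([0.2, 0.1]))
# Slow responses: both units have become strongly coactive, so the
# conflict signal is high
print(response_conflict([0.6, 0.5]))
```

Because the measure is a product of coactivations rather than a function of accuracy, it naturally dissociates from error likelihood in the way the simulations above demonstrate.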
Cognitive neuroscience research relies, in part, on homologies between the brains of human and non-human primates. A quandary therefore arises when presumed anatomical homologues exhibit different functional properties. Such a situation has recently arisen in the case of the anterior cingulate cortex (ACC). In humans, numerous studies suggest a role for ACC in detecting conflicts in information processing. Studies of macaque monkey ACC, in contrast, have failed to find conflict-related responses. We consider several interpretations of this discrepancy, including differences in research methodology and cross-species differences in functional neuroanatomy. New directions for future research are outlined, emphasizing the importance of distinguishing illusory cross-species differences from the true evolutionary differences that make our species unique.
Scalp electroencephalogram (EEG) recordings indicate that regions in medial frontal cortex (MFC) are active following errors. Alcohol consumption reduces this error-related activity, perhaps suggesting that alcohol disrupts the operation of an error monitoring system in MFC. However, it could also be that alcohol consumption affects MFC only indirectly, by impairing stimulus processing and thus making errors harder to detect. This interpretation leads to the prediction that stimulus degradation should likewise reduce error-related activity in MFC. To test this hypothesis, behavioral and EEG data were collected as participants performed a speeded response task with either bright or very dim stimuli. The results using dim stimuli replicated the observed effects of alcohol consumption—with slowed responses accompanied by reduced error-related MFC activity. The sensitivity of MFC to disrupted processing elsewhere in the brain suggests complications in interpreting evidence of disturbed MFC function.
In this chapter we review recent work addressing decisions in which cognitive and emotional demands are simultaneously involved and can at times have opposing effects on behavior. Recent experimental work on three classes of problems—moral decisions, the ultimatum game, and intertemporal choice—has produced a remarkably consistent picture about how decision-making progresses in cases involving competing cognitive and emotional demands. In each of the experimental examples we review, functional MRI (fMRI) data has revealed sets of brain areas that separately correlate with the cognitive and emotional aspects of the task. Further, choices tend to be resolved in favor of which system, cognitive or emotional, has greatest measured activity. When brain activity predominates in the emotion-related brain areas, principally limbic and closely linked cortical areas, choices tend to be resolved in favor of the emotional demand. The opposite choice tends to result when activity predominates in cognition-related areas in dorsolateral prefrontal cortex (DLPFC) and posterior parietal cortex.
It has been proposed that the error-related negativity (ERN) is generated by phase resetting of theta-band EEG oscillations. The present research evaluates a set of analysis methods that have recently been used to provide evidence for this hypothesis. To evaluate these methods, we apply each of them to two simulated data sets: one set that includes theta phase resetting and a second that comprises phasic peaks embedded in EEG noise. The results indicate that the analysis methods do not effectively distinguish between the two simulated data sets. In particular, the simulated data set constructed from phasic peaks, though containing no synchronization of ongoing EEG activity, demonstrates properties previously interpreted as supporting the synchronized oscillation account of the ERN. These findings suggest that the proposed analysis methods cannot provide unambiguous evidence that the ERN is generated by phase resetting of ongoing oscillations.
Cognitive control is required to regulate interactions between brain regions to produce effective, purposeful behavior. We used functional magnetic resonance imaging to investigate the nature of these interactions and the role of prefrontal cortex (PFC) in cognitive control as subjects switched between simple face and word categorization tasks. Face and word stimuli were used because previous research has shown them to activate distinguishable cortical regions, allowing us to measure levels of activity in task-selective brain regions during task switching. We found that activity in brain regions selective for the currently irrelevant task predicted the behavioral cost associated with switching tasks. This finding supports the theory that between-task competition is a critical determinant of behavior. Task switching was also associated with increased activity in a network of regions implicated in cognitive control, including lateral PFC and parietal cortex. Within this network of regions, we observed dissociations between task-selective and general-purpose mechanisms. These findings provide support for theories that propose a control hierarchy comprising regions responsible for maintaining task-specific information about rules or goals, and regions involved in the coordination of these goals.
Monitoring of ongoing processing plays a critical role in regulating cognitive function. Two ERP components, the error-related negativity (ERN) and N2, have been proposed to reflect this monitoring function. Specifically, a recent theory suggests that both components reflect monitoring for response conflict by anterior cingulate cortex (ACC). This view appears to be challenged by findings that alcohol consumption and lesions in ACC have dissociable effects on the ERN and N2. Using a computational model of conflict monitoring, the present research demonstrates that the conflict theory can account for these dissociations in terms of the dissociable effects of alcohol and ACC lesions on processing of relevant stimulus information (that determines ERN amplitude) and processing of irrelevant, distracting information (that determines N2 amplitude). The simulation results suggest new interpretations of the cognitive deficits caused by alcohol consumption (in terms of impaired stimulus processing) and ACC lesions (in terms of impaired attentional control).
The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors by identifying states of the system associated with negative value. The mechanism is formalized in a computational model based on a recent theoretical framework for understanding error processing in humans (C. B. Holroyd & M. G. H. Coles, 2002). The model is used to simulate behavioral and event-related brain potential data in a speeded response time task, and the results of the simulation are compared with empirical data.
How are we able to focus our attention on the task at hand while ignoring myriad distractions? An elegant neuroimaging study in this issue of Nature Neuroscience finds that, contrary to a widely held view, the prefrontal cortex implements attentional control by amplifying task-relevant information, rather than by inhibiting distracting stimuli.
The feedback negativity is a component of the event-related brain potential that is elicited by feedback stimuli associated with unfavorable outcomes. The present research investigated whether this component reflects an evaluation of the valence of experienced outcomes or a process of learning about actions that led to those outcomes. The latter hypothesis predicts that a feedback negativity should be observed only when negative outcomes are experienced in relation to executed actions. Contrary to this prediction, feedback negativities were observed in simple monetary gambling tasks in which participants made no active choices (Experiment 1) and no overt actions (Experiment 2). However, the amplitude of the component was reduced in these tasks relative to a gambling task in which the outcomes appeared to be contingent upon participants’ response choices. This reduction was correlated with changes in participants’ subjective ratings of involvement in the tasks, suggesting that the evaluative process indexed by the feedback negativity is sensitive to the motivational significance of ongoing events.
Functional neuroimaging research in humans has identified a number of brain areas that are activated by the delivery of primary and secondary reinforcers. The present study investigated how activity in these reward-sensitive regions is modulated by the context in which rewards and punishments are experienced. Fourteen healthy volunteers were scanned during the performance of a simple monetary gambling task that involved a "win" condition (in which the possible outcomes were a large monetary gain, a small gain, or no gain of money) and a "lose" condition (in which the possible outcomes were a large monetary loss, a small loss, or no loss of money). We observed reward-sensitive activity in a number of brain areas previously implicated in reward processing, including the striatum, prefrontal cortex, posterior cingulate, and inferior parietal lobule. Critically, activity in these reward-sensitive areas was highly sensitive to the range of possible outcomes from which an outcome was selected. In particular, these regions were activated to a comparable degree by the best outcomes in each condition – a large gain in the win condition and no loss of money in the lose condition – despite the large difference in the objective value of these outcomes. In addition, some reward-sensitive brain areas showed a binary instead of graded sensitivity to the magnitude of the outcomes from each distribution. These results provide important evidence regarding the way in which the brain scales the motivational value of events by the context in which these events occur.
The error-related negativity (ERN) is an event-related brain potential observed when subjects commit errors. To examine whether the ERN is sensitive to the value of errors, the motivational significance of errors was manipulated in two experiments. In Experiment 1, low and high monetary value errors were compared to evaluate the effect of trial value on the ERN. In Experiment 2, subjects performed a flanker task both while their performance was being evaluated and during a control condition. Consistent with the notion that the error-detection system is sensitive to the significance of errors, the ERN was significantly larger on high-value trials in Experiment 1 and during evaluation in Experiment 2. There were no corresponding effects on the correct response negativity, and no behavioral differences between conditions were evident in either experiment. These results are discussed in terms of the functional role of the ERN in response monitoring.
According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the co-activation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an attentional task demonstrated that the ERN—its timing and sensitivity to task parameters—can be explained in terms of the conflict theory. A new experiment confirmed predictions of this theory regarding the ERN and a second scalp potential, the N2, that is proposed to reflect conflict monitoring on correct response trials. Further analysis of the simulation data indicated that errors can be detected reliably on the basis of post-error conflict. It is concluded that the ERN can be explained in terms of response conflict, and that monitoring for conflict may provide a simple mechanism for detecting errors.
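In conflict-monitoring models of this kind, conflict is commonly formalized as an energy-style measure over co-active, mutually incompatible response units. A minimal sketch of that computation (the activations and inhibitory weight below are illustrative, not parameters from the reported simulations):

```python
# Minimal sketch of a conflict measure over two mutually incompatible
# response units: conflict grows with the product of their activations,
# scaled by the strength of their inhibitory connection.
# All values are illustrative, not taken from the reported simulations.

def conflict_energy(a1: float, a2: float, inhibitory_weight: float = 1.0) -> float:
    """Energy-style conflict: high only when both units are co-active."""
    return inhibitory_weight * a1 * a2

# A clean correct trial: the target response dominates, conflict is low.
low = conflict_energy(0.9, 0.1)

# A high-conflict trial: both responses are partially active, as when a
# prepotent incorrect response competes with the correct one.
high = conflict_energy(0.6, 0.5)

assert high > low
```

On this measure, post-error trials (where the correct response becomes active after the erroneous one) naturally yield high conflict, which is the basis for the error-detection claim above.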
The signal averaging approach typically used in event-related brain potential (ERP) research assumes that peaks in ERP waveforms reflect neural activity that is uncorrelated with activity in the ongoing EEG. However, this assumption has been challenged by research suggesting that ERP peaks reflect event-related synchronization of ongoing EEG oscillations. Several methods have recently been used to demonstrate that particular ERP peaks result from this kind of event-related synchronization of oscillations. In this study, we investigated the validity of these methods of inference. We simulated epochs of EEG data by superimposing phasic peaks on noise characterized by the power spectrum of the EEG. When applied to the simulated data, the methods in question produced results that have previously been interpreted as evidence of synchronized oscillations, suggesting that the methods can indicate synchronization in conditions where no such synchrony is present. These findings suggest that proposed analysis methods may not effectively disambiguate competing views of the generation of peaks in ERP waveforms.
In this chapter, we review some of the central results from our work on the conflict monitoring hypothesis, with a focus on the synergistic relationship between computational modeling and empirical work. In addition, we present results from two new studies, which were conducted in response to a recent challenge to the conflict monitoring hypothesis.
Relating cognitive and affective theories of the error-related negativity
There is currently some debate over whether the error-related negativity reflects the cognitive process of error detection or an appraisal of the affective or motivational significance of detected errors. This article considers the relationship between these cognitive and affective theories of the error-related negativity, with particular focus on how affective theories might be informed by existing cognitive models. It is argued that the approaches should be viewed as complementary rather than opposed, a possibility that is particularly evident when the likely functional role of affective reactions is considered.
A recent study (Gehring and Willoughby, 2002a; Science, 295, 2279-2282) reported the observation in humans of an event-related brain potential component that is sensitive to the value of outcomes in a gambling task. This component, labeled the medial frontal negativity (MFN), was most pronounced following monetary losses as opposed to monetary gains. In this study, we investigate the relationship between the MFN and the error-related negativity (ERN), a component elicited by feedback indicating incorrect choice performance. We argue that the two components can be understood in terms of a recently proposed theory that predicts the occurrence of such scalp negativities following each stimulus indicating that ongoing events are worse than expected. The results from two experiments using a gambling task demonstrate that the sensitivity of the ERN to the utilitarian and performance aspects of the feedback depends on which aspect is most salient. The results are also consistent with a similar scalp topography of the MFN and ERN, providing further support for the view that the two components are manifestations of the same underlying cognitive and neural process.
Previous research has shown that two components of the event-related brain potential, the P300 and feedback negativity, are sensitive to information about rewards and penalties. The present study investigated the properties of these components in a simple gambling game that required participants to choose between cards that were unpredictably associated with monetary gains and losses of variable magnitude. The aim was to determine the sensitivity of each component to two critical features of reward stimuli: magnitude (small or large) and valence (win or loss). A double dissociation was observed—with the P300 sensitive to reward magnitude but insensitive to reward valence, and the feedback negativity showing the opposite pattern—suggesting that these two fundamental features of rewarding stimuli are evaluated rapidly, and separately, in the human brain. Subsequent analyses provided further evidence of functional dissociations between the feedback negativity and P300. First, the P300, but not the feedback negativity, showed sensitivity to the reward value of alternative, non-selected stimuli. Second, individual differences in the amplitude of the feedback negativity correlated with individual differences in risk-taking behavior observed following monetary losses, whereas individual differences in P300 amplitude were related to behavioral adjustments observed in response to alternative, unchosen outcomes.
In our event-related functional magnetic resonance imaging (fMRI) experiment, participants learned to select between two response options by trial-and-error, using feedback stimuli that indicated monetary gains and losses. The results of the experiment indicate that error responses and error feedback activate the same region of dorsal anterior cingulate cortex, suggesting that this region is sensitive to both internal and external sources of error information.
Stimuli that elicit a prepotent but incorrect response are typically associated with an enhanced electrophysiological N2 that is thought to index the operation of a control process such as inhibition or conflict detection. However, recent studies reporting the absence of the N2 modulation in go/no-go tasks involving auditory stimuli challenge this view: It is not clear why inhibition or conflict detection should be sensitive to the modality of the stimulus. Here we present electrophysiological data from a go/no-go task suggesting that the relative size of the N2 modulation in visual and auditory tasks depends on the perceptual overlap between the go and no-go stimuli. Stimuli that looked similar but sounded different were associated with a typical visual N2 modulation and the absence of an auditory N2 modulation, consistent with previous findings. However, when we increased the perceptual overlap between the auditory stimuli, a large no-go N2 was observed. These findings are discussed in terms of existing hypotheses of the N2, and clarify why previous studies have not found an N2 modulation in auditory go/no-go tasks.
|2003 and earlier|
The error-related negativity (ERN) is a negative deflection in the event-related brain potential associated with error processing. A recent theory holds that the ERN is elicited by the impact of a reward prediction error signal carried by the mesencephalic dopamine system on anterior cingulate cortex. The theory predicts that larger ERNs should be elicited by unexpected unfavorable outcomes than by expected unfavorable outcomes. We tested the theory in an experiment in which the frequency of occurrence of reward was varied by condition, reasoning that the system that produces the ERN would come to expect non-reward when rewards were infrequent. Consistent with the theory, we found that larger ERNs were elicited by unexpected absences of reward.
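The prediction follows from reward-prediction-error logic: when rewards are frequent, the learned expectation of reward is high, so an omitted reward produces a large negative prediction error. A minimal simulation sketch under those assumptions (the learning rate, reward probabilities, and the ERN proxy are all illustrative, not the theory's fitted parameters):

```python
# Hedged sketch of the reward-prediction-error account: V tracks the
# expected reward via a delta-rule update; the prediction error
# delta = r - V is more negative on non-reward trials when rewards are
# common (V is high), yielding a larger simulated "ERN".
import random

def simulate_mean_ern(p_reward, n_trials=10000, alpha=0.1, seed=0):
    rng = random.Random(seed)
    v = 0.0                 # learned reward expectation
    ern_sizes = []
    for _ in range(n_trials):
        r = 1.0 if rng.random() < p_reward else 0.0
        delta = r - v       # reward prediction error
        if r == 0.0:
            ern_sizes.append(-delta)  # ERN proxy: size of negative RPE
        v += alpha * delta  # delta-rule update of the expectation
    return sum(ern_sizes) / len(ern_sizes)

# Non-reward is more unexpected when rewards are frequent, so the
# simulated ERN proxy is larger in the frequent-reward condition.
frequent = simulate_mean_ern(p_reward=0.8)
infrequent = simulate_mean_ern(p_reward=0.2)
assert frequent > infrequent
```

This is the same qualitative pattern the experiment reports: larger ERNs to unexpected absences of reward.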
Four experiments investigated the effect of recent selective practice on the cost of switching between two tasks afforded by letter-digit pairs: alphabet-arithmetic and shape comparison. Experiments 1 and 2 found a greater cost associated with switching to the more recently practiced task: evidence that task-set inertia contributes to switching costs. Experiment 3 found this effect to be limited to trials on which a recently trained stimulus followed another such stimulus: a result problematic for all current theories of task-set priming. Experiment 4 showed that the effect of recent practice was eliminated by active preparation for a task-switch: it appears that endogenous task-set preparation reduces the effects of task-set inertia.
Neuroimaging and computational modeling studies have led to the suggestion that response-conflict monitoring by anterior cingulate cortex plays a key role in cognitive control. For example, response conflict is high when a response must be withheld ("NoGo") in contexts in which there is a prepotent tendency to make an overt ("Go") response. An event-related brain potential (ERP) component, the N2, is more pronounced on NoGo as compared to Go trials, and was previously thought to reflect the need to inhibit the Go response. However, the N2 may instead reflect the high degree of response conflict on NoGo trials. If so, an N2 should also be apparent when subjects make a Go response in conditions where NoGo events are more common. To test this hypothesis, we collected high-density ERP data from subjects performing a Go/NoGo task, in which the relative frequency of Go vs. NoGo stimuli was varied. Consistent with our hypothesis, an N2 was apparent on both Go and NoGo trials, and showed the properties expected of an ERP measure of conflict detection on correct trials: (i) it was enhanced for low-frequency stimuli, irrespective of whether these stimuli were associated with generating or suppressing a response; (ii) it was localized to anterior cingulate cortex. This suggests that previous conceptions of the NoGo N2 as indexing response inhibition may be in need of revision. Instead, the results are consistent with the view that the N2 in Go/NoGo tasks reflects conflict arising from competition between the execution and inhibition of a single response.
It has been reported that it is harder to switch to a strong, well-practiced task from a weaker, less-practiced task than vice versa. In three experiments we confirmed this surprising asymmetry and investigated how it is affected by a reduction in the degree of interference between tasks. Experiment 1 progressively delayed the onset of the stimulus attribute associated with the stronger task. Experiments 2 and 3 separated the response sets of the tasks. Both manipulations reduced, without eliminating, interference of the stronger with the weaker task, but reversed the asymmetry of switch costs, resulting in a larger cost of switching to the weaker task. The results are interpreted in terms of a model of the interaction between control input, task strength, and task priming.
A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.
Conventional analysis of electroencephalography (EEG) and magnetoencephalography (MEG) often relies on averaging over multiple trials to extract statistically relevant differences between two or more experimental conditions. In this article we demonstrate single-trial detection by linearly integrating information over multiple spatially distributed sensors within a predefined time window. We report an average, single-trial discrimination performance of Az ~ 0.80 and fraction correct between 0.70 and 0.80, across three distinct encephalographic data sets. We restrict our approach to linear integration, as it allows the computation of a spatial distribution of the discriminating component activity. In the present set of experiments the resulting component activity distributions are shown to correspond to the functional neuroanatomy consistent with the task (e.g., contralateral sensory–motor cortex and anterior cingulate). Our work demonstrates how a purely data-driven method for learning an optimal spatial weighting of encephalographic activity can be validated against the functional neuroanatomy.
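The linear-integration idea can be illustrated with a toy version of the approach: learn one weight per sensor on two-condition data, then read the learned weight vector as a spatial map of the discriminating component. Everything below (sensor count, signal placement, noise level, and plain logistic-regression training on synthetic data) is an illustrative stand-in, not the authors' actual pipeline:

```python
# Toy illustration of single-trial discrimination via a learned linear
# spatial weighting y = w.x + b over per-sensor values (e.g., averaged
# within a predefined time window). Synthetic data and plain logistic
# regression stand in for real EEG/MEG and the authors' method.
import math
import random

N_SENSORS = 8  # illustrative sensor count

def make_trial(rng, condition):
    # Condition 1 adds a small signal on the first three "sensors".
    return [(0.5 if condition == 1 and i < 3 else 0.0) + rng.gauss(0.0, 0.5)
            for i in range(N_SENSORS)]

def train_linear_discriminator(trials, labels, lr=0.1, epochs=200):
    w, b = [0.0] * N_SENSORS, 0.0
    for _ in range(epochs):
        for x, y in zip(trials, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = y - p  # gradient of the logistic loss
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

rng = random.Random(1)
labels = [i % 2 for i in range(200)]
trials = [make_trial(rng, c) for c in labels]
w, b = train_linear_discriminator(trials, labels)

# The weight vector doubles as a spatial map of the discriminating
# component: signal-carrying sensors receive the largest weights.
acc = sum(((sum(wi * xi for wi, xi in zip(w, x)) + b) > 0) == (y == 1)
          for x, y in zip(trials, labels)) / len(trials)
```

Because the integration is linear, the learned weights can be projected back onto the sensor layout, which is what allows the validation against functional neuroanatomy described above.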
The anterior cingulate cortex is a region of the brain thought to be involved in actions guided by sensation, cognition, and emotion. This article provides evidence for anterior cingulate cortex function from both human and animal studies using neuroimaging, electrophysiology, lesion, and genetic methodologies.
Switching between two tasks afforded by the same stimuli results in slower reactions and more errors on the first stimulus after the task changes. This "switch cost" is reduced, but not usually eliminated, by the opportunity to prepare for a task switch. While there is agreement that this preparation effect indexes a control process performed before the stimulus, the "residual" cost has been attributed to several sources: to a control process essential for task-set reconfiguration that can be carried out only after the stimulus onset, to probabilistic failure to engage in preparation prior to the stimulus, and to two kinds of priming from previous trials: positive priming of the now-irrelevant task and inhibition of the now-relevant task. The main evidence for the carry-over of inhibition is the observation that it is easier to switch from the stronger to the weaker of a pair of tasks afforded by the stimulus than vice versa. We survey available data on interactions between task switching and three manipulations of relative task strength: pre-experimental experience, stimulus-response compatibility, and intra-experimental practice. We conclude that it is far from universally true that it is easier to switch to the weaker task. Either inhibition of the stronger task-set is a strategy used only in the special case of extreme inequality of strength, or its consequences for later performance may be masked by slower post-stimulus control operations for more complex tasks. Inhibitory priming may also be stimulus specific.
The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.