Some findings and research lines I thought were worth highlighting or briefly reviewing:
- Why 50/50 go/nogo rates might be a good idea for threat-related emotional Go/Nogo tasks
- Anticipatory automaticity and a reliable spatial attentional bias task
- Attentional carryover: Effects of trial-to-trial history on attentional bias variability
- Resolution-"resilient", threshold-free, topography-based cluster analysis
- Freezing as active threat anticipation
- The R3 model of dual processing
Task features of the emotional Go/Nogo task
The emotional Go/Nogo task, as opposed to a standard non-emotional Go/Nogo task, aims to study the kind of impulsivity that emotional stimuli evoke - not the more motoric kind of impulsivity induced by prepotent responding. Despite this essential distinction, anecdotal experience suggests that some researchers have generalized the rule that go frequency should be much higher than nogo frequency, and certainly should not be at the 50-50 level.
In a paper in Consciousness and Cognition (Gladwin, Möbius & Vink, 2019) concerning a series of emotional Go/Nogo studies, we presented arguments why this seems to be a mistake. Adapted from that paper:
- Testing whether threat stimuli induce impulsive responses does not depend on having a (strongly) prepotent response induced by the non-emotional manipulation of go-likelihood. If an emotional stimulus triggers a response or lowers the response threshold, it could well do so without a prepotent-response manipulation. Requiring response prepotency confuses different research aims. It could of course be interesting to test whether the degree of response prepotency interacts with emotion-induced impulsivity.
- The 50-50 distribution avoids the disadvantage of a relatively small number of trials in the nogo condition. If you aim to use this trial category, e.g., in psychophysiological work, losing trials is a waste if the unequal distribution in fact offers no advantage for your particular case.
- In the task-relevant version of the task, unequal go- and nogo-frequencies would result in strongly differing block-contexts, which would be confounded with trial type; and hence, results would be difficult to interpret. That is: threat-go trials only occur in threat-go blocks, in which participants would be exposed to primarily threatening stimuli; while on threat-nogo blocks, most stimuli would be non-threatening. Since task-relevant task versions were found to be far more sensitive to threat-induced impulsivity, this issue could block this more effective task feature from being used.
- Unequal go and nogo distributions have the disadvantage of confounding the nogo-manipulation with frequency and hence processes such as expectation or attention, which could also conceivably interact with emotional stimuli. Similarly to the previous point, this is a potentially fatal flaw, unlike merely not having response prepotency.
- Finally, it is not necessarily methodologically optimal to have a higher baseline level of impulsivity induced by go-frequency; this could for example lead to ceiling effects on commission errors and reduce the ability to detect additional emotional effects.
In terms of results, effects of emotional stimuli on both RT and accuracy were in fact strong and replicable with 50-50 proportions. A second important point, however, was that this emotion-induced impulsivity was found - also in confirmatory follow-up studies - to depend on the task-relevance of the emotional stimuli. An additional study with a higher go probability found no effects (which, following the arguments above, would in any case have been difficult to interpret). Thus, it would seem to be a mistake to consider a high ratio of go trials optimal or, even worse, necessary for studying emotion-induced impulsivity using Go/Nogo tasks.
Anticipatory automaticity and the cued Visual Probe Task (cVPT)
The predictive cue-based Visual Probe Task, cVPT or predVPT for short (Gladwin, 2017; first refined to its current form, with separate training phases, in Gladwin & Vink, 2018), is a potentially useful variant of the dot-probe task based on predictive cues that provide information about where emotionally salient but task-irrelevant stimuli may appear. Trials on which emotionally salient stimuli (but not probe stimuli) occur are intermixed with trials on which probe stimuli (but not the emotionally salient stimuli) occur. This provides a bias score, derived from performance on probe trials, that is based on predicted stimulus categories rather than actually-presented exemplars. The bias was termed the anticipatory attentional bias or, more theory-neutrally, the predictive cue-based attentional bias.
The use of predictive cues, together with other task features, was expected to reduce undesirable trial-to-trial variation, in particular that caused by which exemplars of stimulus categories happen to have been presented on a particular trial. The cVPT was therefore used to study Attentional Bias Variability (Gladwin & Vink, 2018) involving alcohol stimuli, which was predicted and confirmed to be associated with conflicting automatic associations, measured using dual Single-Target Implicit Association Tests. Further, the anticipatory bias was correlated with risky drinking. The reduction of exemplar-related variation was also thought to potentially improve the reliability of bias scores. Good reliability (around .7 to .8) of the anticipatory bias was indeed found for alcohol (Gladwin, 2019), and individual differences in bias were again associated with risky drinking. A series of follow-up studies (Gladwin, Banic, Figner & Vink, 2020) further explored the nature of the bias and individual differences. One important finding of these studies was the validation of the previously found reliability in terms of predicted stimulus categories rather than cue features, as reliability was very low when the cues were made non-predictive.
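To make the probe-trial bias score concrete: in dot-probe-style tasks such a score is typically the mean RT difference between trials on which the probe appears away from the (predicted) salient location versus at it. A minimal sketch, with illustrative field names rather than the actual task code:

```python
import statistics

def vpt_bias_score(trials):
    """Bias score as mean RT on incongruent trials (probe away from the
    predicted-salient location) minus mean RT on congruent trials (probe
    at that location). Positive = attention drawn towards the salient
    category. Field names ('congruent', 'rt') are illustrative."""
    congruent = [t["rt"] for t in trials if t["congruent"]]
    incongruent = [t["rt"] for t in trials if not t["congruent"]]
    return statistics.mean(incongruent) - statistics.mean(congruent)

# Toy data: responses are faster when the probe appears at the cued location.
trials = [
    {"congruent": True, "rt": 480}, {"congruent": True, "rt": 500},
    {"congruent": False, "rt": 520}, {"congruent": False, "rt": 540},
]
print(vpt_bias_score(trials))  # positive: a bias towards the cued category
```

In the cVPT, "congruent" would be defined relative to the location predicted by the cue, not relative to an actually-presented emotional stimulus.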
Anticipatory spatial attentional bias to threat (Gladwin, Möbius, McLoughlin & Tyndall, 2019) was also found to exist, and this was replicated using an improved procedure (Gladwin, Figner & Vink, 2019) for assessing the reliability of the anticipatory component of the bias, which involved reversing the predictive values of the cues. For both studies, this reliability was high when considered in the context of reports of near-zero reliability for traditional tasks, and given the relatively complex task designs, but nevertheless only modest in psychometric terms; it was also lower, around .4 to .5, than the reliability found for the alcohol-related bias. However, a further study (Gladwin & Vink, 2020) confirmed that when the task and procedure were optimized for reliability, a similar split-half reliability as for the alcohol bias, around .7, could be achieved. Reliability increased to .89 when probe location probability was manipulated to alternate between blocks with a task-induced bias towards versus away from threat (Gladwin, Halls & Vink, 2021).
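The split-half reliabilities reported here can be sketched as follows: compute each participant's bias score from two halves of the trials (e.g., odd versus even), correlate the two sets of scores, and apply the Spearman-Brown correction for the halved test length. A minimal illustration with made-up numbers, not the published analysis code:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(half1, half2):
    """Correlate per-participant bias scores from two trial halves, then
    apply the Spearman-Brown correction: r_sb = 2r / (1 + r)."""
    r = pearson_r(half1, half2)
    return 2 * r / (1 + r)

# Toy per-participant bias scores from odd vs even trials (made-up values).
odd = [10, 20, 30, 40]
even = [15, 18, 35, 38]
print(round(split_half_reliability(odd, even), 2))  # 0.97
```

The correction compensates for the fact that each half-score is based on only half the trials of the full measure.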
The interpretation of the bias as anticipatory was based on the outcome-based cognitive response selection model (defining "R3-reflectivity", see below) that originally motivated the anticipatory attentional bias work. This was supported in a predictive ABM training study (Gladwin, Möbius & Becker, 2019), in which training towards or away from predicted threat generalized to post-training stimulus-evoked bias; i.e., it's not just about the visual features of the cues becoming salient themselves via conditioning. This approach may also address a potential issue with usual Attention Bias Modification paradigms, namely that even when training attention away from certain stimulus categories, those categories are still task-relevant and therefore being made or kept salient, termed the salience side-effect (Gladwin, 2017).
There are lots of open questions remaining. For instance, the interpretation of effects so far has been relatively equivocal regarding specific underlying processes, taking an overall "attention as selection for action" perspective in which an "attentional" effect need only be caused by the (predicted) spatial positions of salient visual stimuli. Further research, possibly using psychophysiological or neuroimaging methods, is needed to determine which specific mechanisms underlie the bias (although this would seem to be true for pretty much every relatively complex cognitive-emotional task once you start digging); such work might well reveal a trade-off between a task with good psychometric properties and a task that attempts to break apart naturally coordinated processes. But the results so far do add to the evidence that the well-known psychometric problems with measuring attentional bias using traditional task variants should not lead to premature dismissal of such behavioural measures in general: reliability of the bias can be sufficiently high in principle. There are new tasks out there with various different features (predictive cues being just one example) that promise stronger empirical foundations.
Attentional carryover: Effects of trial-to-trial history on attentional bias variability
One potential cause of within-subject Attentional Bias Variability - whether this is considered noise or an informative measure in itself - concerns trial-to-trial carryover effects (Gladwin & Figner, 2019). Carryover refers to the dependence of the attentional bias on trial N on the probe location on trial N - 1. This was indeed found, for both colour and threat stimuli, in a so-termed "diagonalised" Visual Probe Task optimized for studying trial-to-trial fluctuations. Responding to a probe stimulus at the location of a given colour induced an attentional bias towards that colour on the next trial. Carryover for threat versus neutral stimuli was asymmetrical: a bias towards the threat versus neutral cue was only found following trials on which the participant had responded to a probe at the location of the threat versus neutral cue. This pattern of previous-probe dependence of the threat-related bias was also found for the anticipatory attentional bias (Gladwin, Figner & Vink, 2019). The effect was subsequently replicated with different facial stimuli and task variants (Gladwin, Jewiss & Vink, 2020). This study also tested whether taking the previous trial's target location into account could yield more reliable scores even for exemplar-based (rather than predictive cue-based) biases, and replicable relationships with various mental health measures, but this was not the case.
Such carryover effects may be related to trauma symptoms.
Resolution-"resilient", threshold-free, topography-based cluster analysis (initially made for fMRI)
Landscape-based cluster analysis (Gladwin, Vink & Mars, 2016) can be used to define clusters (e.g., in fMRI data) topographically, rather than via a cut-off on a statistical threshold. Recognizing a "blob" intuitively involves looking at its shape, which this method formalizes as the second derivative of activation over space. A recursive clustering function defined 3D blobs of arbitrary shapes, in which "threads" of spreading activation searched for edges, i.e., inflection points, in a search pattern flowing outwards from a local maximum. Each blob defined this way could then be assigned an activation score combining blob size and the statistical significance of effects in the voxels it contains.
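The blob-growing idea can be illustrated in one dimension: starting from each local maximum, extend the cluster outwards for as long as the map remains concave (negative discrete second derivative), stopping at inflection points; each blob then receives a mass-like score. This is a simplified 1D sketch of the principle, not the published 3D implementation:

```python
def landscape_blobs_1d(stat_map):
    """Toy 1D version of landscape clustering: blobs grow outwards from
    local maxima while the second derivative stays negative (concave)
    and stop at inflection points. Each blob gets a score summing the
    statistic over its extent (combining size and magnitude)."""
    n = len(stat_map)
    # Discrete second derivative at interior position p: x[p-1] - 2*x[p] + x[p+1].
    d2 = [stat_map[p - 1] - 2 * stat_map[p] + stat_map[p + 1]
          for p in range(1, n - 1)]
    def concave(p):
        return 1 <= p <= n - 2 and d2[p - 1] < 0
    blobs = []
    for p in range(1, n - 1):
        if stat_map[p] > stat_map[p - 1] and stat_map[p] > stat_map[p + 1]:
            lo = hi = p  # grow the blob outwards from the local maximum
            while concave(lo - 1):
                lo -= 1
            while concave(hi + 1):
                hi += 1
            blobs.append((lo, hi, sum(stat_map[lo:hi + 1])))
    return blobs

# A broad bump (inflection points at indices 3 and 5) and a small sharp bump.
stat_map = [0, 1, 4, 9, 12, 9, 4, 1, 0, 2, 5, 2, 0]
print(landscape_blobs_1d(stat_map))  # [(3, 5, 30), (10, 10, 5)]
```

In the actual 3D method, the same edge-finding is done recursively over voxel neighbourhoods rather than along a single axis.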
A null-hypothesis distribution of the maximum activation score over all blobs found in the whole brain can then be generated using randomization/permutation of the underlying individual b-maps, yielding t-maps sampled under the null hypothesis. Requiring a maximal activation score found in fewer than 5% of these permutation-based t-maps thus provides control of the whole-brain familywise error rate, taking account of dependence between voxels. One aim of this method is to avoid an arbitrary threshold for the initial definition of clusters. Further, the permutation testing of activation scores scales up happily with better spatial resolution (i.e., more and smaller voxels), unlike traditional familywise correction, in which statistical power suffers from a larger number of voxels.
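The permutation logic can be sketched using the simpler maximum-voxel statistic (the published method uses blob activation scores instead, but the familywise-error reasoning is the same). Under the null hypothesis, the sign of each participant's b-map is exchangeable, so exhaustive sign-flipping yields the null distribution of the map-wise maximum. A minimal sketch, not the published code:

```python
import itertools
import statistics

def t_map(b_maps):
    """One-sample t value per voxel, computed across participants' b-maps."""
    n = len(b_maps)
    return [statistics.mean(voxel) / (statistics.stdev(voxel) / n ** 0.5)
            for voxel in zip(*b_maps)]

def fwe_max_stat_p(b_maps):
    """Familywise-error-corrected p-value of the largest statistic in the
    map: every sign assignment of the participant maps gives a t-map
    sampled under the null, and the observed map-wise maximum is compared
    against the resulting null distribution of maxima."""
    observed = max(t_map(b_maps))
    null_max = [max(t_map([[s * v for v in b] for s, b in zip(signs, b_maps)]))
                for signs in itertools.product([1, -1], repeat=len(b_maps))]
    return sum(m >= observed for m in null_max) / len(null_max)

# Five participants x three voxels; only voxel 0 carries a consistent effect.
b_maps = [[2.0, 0.1, -0.3], [1.5, -0.2, 0.4], [2.5, 0.3, 0.1],
          [1.8, 0.0, -0.2], [2.2, -0.1, 0.2]]
print(fwe_max_stat_p(b_maps))  # 0.03125 = 1/32: significant at the .05 level
```

Because the null distribution is built from map-wise maxima, the comparison automatically respects dependence between voxels, and adding more (smaller) voxels does not by itself inflate the correction.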
Threat anticipation and freezing
Freezing (operationalized as body-sway reduction and bradycardia in a threatening context) may be a preparatory, rather than "helpless", state (Gladwin, Hashemi, van Ast & Roelofs, 2016). We used a Virtual Shooting Task to manipulate the ability to prepare defensive responses to threat, by making participants either armed or unarmed. Freezing was very strongly related to being armed and, within the armed condition, additionally to the degree of threat. The scientific concept of freezing thus has to be separated from the idea of "being frozen in fear." The task has been used to study neural effects related to freezing and the freeze-fight transition (Hashemi et al., 2019). Interactions between threat effects and sleep deprivation were studied using a stop-signal version of the task (van Peer, Gladwin & Nieuwenhuys, 2019); this showed that threat affected impulsivity, while sleep deprivation caused a general reduction in accuracy.
Freezing effects were further explored behaviourally in a subsequent study on distraction and freeze-terminating stimuli (Gladwin & Vink, 2018). We looked at the effects of merely anticipated versus actual virtual attacks as distractors in an emotional Sternberg task. While an attack was impending, reaction times were slowed; but this appeared to be due to a reversible inhibited state that was released once the attack actually occurred. Anticipatory slowing of responses on a threat-unrelated task was confirmed in a subsequent series of studies (Gladwin & Vink, 2020). In these studies, visually neutral predictive cues were used as distractors instead of the previously used face versus no-face cues. This was done to improve the comparability between impending-attack (but no actual attack) and control (no risk of attack) trials. (Comparisons between anticipated and actual attack are better made using the task design of the previous study.) These studies also explored the temporal dynamics of the response by varying the time between cue and probe stimulus, showing that the slowing effect arose at around 600 ms and had decreased by 1200 ms. Note that this contrasts with the threat-induced impulsivity found in, e.g., Gladwin, Möbius & Vink (2019). This minor paradox was interpreted in terms of a selective lowering of the response threshold for threat-relevant stimuli.