Some findings and research lines I thought were worth highlighting or providing an overview for:
- Why 50/50 go/nogo rates might be a good idea for threat-related emotional Go/Nogo tasks
- Anticipatory automaticity and a reliable spatial attentional bias task
- Attentional carryover: Effects of trial-to-trial history on attentional bias variability
- Resolution-"resilient", threshold-free, topography-based cluster analysis
- Freezing as active threat anticipation
- Models of reflectivity and automaticity
Task features of the emotional Go/Nogo task

The emotional Go/Nogo task, as opposed to a standard non-emotional Go/Nogo task, aims to study the kind of impulsivity that emotional stimuli evoke - not the more motoric kind of impulsivity induced by prepotent responding. Despite this essential distinction, anecdotal experience suggests that some researchers have generalized the rule that go frequency should be much higher than nogo frequency, and certainly not at the 50/50 level.
In a paper in Consciousness and Cognition (Gladwin, Möbius & Vink, 2019) concerning a series of emotional Go/Nogo studies, we presented arguments why this seems to be a mistake. Adapted from that paper:
- Testing whether threat stimuli induce impulsive responses does not depend on having a (strongly) prepotent response induced by the non-emotional manipulation of go-likelihood. If an emotional stimulus triggers a response or lowers the response threshold, it could well do so without a prepotent response manipulation. To require response prepotency is to confuse different research aims. It could be interesting, of course, to test whether the degree of response prepotency interacts with emotion-induced impulsivity.
- The 50/50 distribution avoids the disadvantage of a relatively small number of trials in the nogo condition. If you aim to use this trial category, e.g., in psychophysiological work, losing those trials is a waste if the unequal distribution offers no advantage for your particular case.
- In the task-relevant version of the task, unequal go- and nogo-frequencies would result in strongly differing block-contexts, which would be confounded with trial type; and hence, results would be difficult to interpret. That is: threat-go trials only occur in threat-go blocks, in which participants would be exposed to primarily threatening stimuli; while on threat-nogo blocks, most stimuli would be non-threatening. Since task-relevant task versions were found to be far more sensitive to threat-induced impulsivity, this issue could block this more effective task feature from being used.
- Unequal go and nogo distributions have the disadvantage of confounding the nogo-manipulation with frequency and hence processes such as expectation or attention, which could also conceivably interact with emotional stimuli. Similarly to the previous point, this is a potentially fatal flaw, unlike merely not having response prepotency.
- Finally, it is not necessarily methodologically optimal to have a higher baseline level of impulsivity induced by go-frequency; this could for example lead to ceiling effects on commission errors and reduce the ability to detect additional emotional effects.
In terms of results, effects of emotional stimuli on both RT and accuracy were in fact strong and replicable with 50/50 proportions. A second important point, however, was that this emotion-induced impulsivity was found - also in confirmatory follow-up studies - to be dependent on the task-relevance of the emotional stimuli. An additional study with a higher go probability was run and no effects were found (and these would, indeed, have been difficult to interpret anyway following the arguments above). Thus, it would seem to be a mistake to consider a high ratio of go trials optimal or, even worse, necessary to study emotion-induced impulsivity using Go/Nogo tasks.
Anticipatory automaticity and the cued Visual Probe Task (cVPT)

The predictive cue-based Visual Probe Task, cVPT or predVPT for short (Gladwin, 2017, but first refined to its current form with separate training phases in Gladwin & Vink, 2018), is a possibly useful variant of the dot-probe task based on predictive cues that provide information about where emotionally salient but task-irrelevant stimuli may appear. Trials on which emotionally salient stimuli (but not probe stimuli) occur are intermixed with trials on which probe stimuli (but not the emotionally salient stimuli) occur. This provides a bias score derived from performance on probe trials that is based on predicted stimulus categories rather than actually-presented exemplars. The bias was termed the anticipatory attentional bias or, in more theoretically neutral terms, the predictive cue-based attentional bias.
The two trial types of the task are illustrated below: Picture trials establish (over an explicit training phase, in the current version of the task) the expectation of where stimuli from each category may appear; here, blue Os would predict alcohol stimuli, and yellow bars would predict non-alcohol stimuli. Probe trials require a response and provide the behavioural measure of whether attention has been affected by the predictive cues: In the illustration, the probe stimulus appeared at the predicted alcohol location, and responses to it would be expected to be relatively fast for riskier drinkers. Note that a non-probe stimulus is presented as well (the VV), since otherwise it seems likely that the basic exogenous attentional process of detecting the appearance of a probe by itself could remove any measurable behavioural effects of salience-driven anticipatory attention.
For a demo, please see here.
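To make the bias score concrete, here is a minimal sketch of how such a score could be computed from probe-trial reaction times. The data layout and function name are hypothetical illustrations, not code from the published task:

```python
import statistics

def bias_score(trials):
    """Compute a simple attentional bias score from probe trials.

    Each trial is a dict with 'rt' (reaction time in ms) and 'probe_at'
    ('salient' if the probe appeared at the location predicted for the
    salient category, 'neutral' otherwise). A positive score means
    faster responses at the predicted salient location, i.e., a bias
    towards the predicted category.
    """
    rt_salient = [t["rt"] for t in trials if t["probe_at"] == "salient"]
    rt_neutral = [t["rt"] for t in trials if t["probe_at"] == "neutral"]
    return statistics.mean(rt_neutral) - statistics.mean(rt_salient)

# Illustrative made-up data: slightly faster responses at the
# predicted salient location.
trials = [
    {"rt": 420, "probe_at": "salient"}, {"rt": 435, "probe_at": "neutral"},
    {"rt": 410, "probe_at": "salient"}, {"rt": 440, "probe_at": "neutral"},
]
print(bias_score(trials))  # positive: bias towards the predicted category
```

In the cVPT, crucially, `probe_at` would be defined by the cue-predicted category location rather than by any actually-presented emotional exemplar.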
The use of predictive cues, together with other task features, was expected to reduce undesirable trial-to-trial variation, in particular that caused by which exemplars of stimulus categories happen to have been presented on a particular trial. The cVPT was therefore used to study Attentional Bias Variability (Gladwin & Vink, 2018) involving alcohol stimuli, which was predicted and confirmed to be associated with conflicting automatic associations, measured using dual Single-Target Implicit Association Tests. Further, the anticipatory bias was correlated with risky drinking. The reduction of exemplar-related variation was also thought to potentially improve the reliability of bias scores. Good reliability (around .7 to .8) of the anticipatory bias was indeed found for alcohol (Gladwin, 2019), and individual differences in bias were again associated with risky drinking. A series of follow-up studies (Gladwin, Banic, Figner & Vink, 2020) further explored the nature of the bias and individual differences. One important finding of these studies was the validation of the previously found reliability in terms of predicted stimulus categories rather than cue features, as reliability was very low when the cues were made non-predictive.
Anticipatory spatial attentional bias to threat (Gladwin, Möbius, McLoughlin & Tyndall, 2019) was also found to exist, and this was replicated using an improved procedure (Gladwin, Figner & Vink, 2019) for assessing the reliability of the anticipatory component of the bias that involved reversing the predictive values of the cues. For both studies, this reliability was high when considered in the context of reports of near-zero reliability for traditional tasks and the relatively complex task designs, but nevertheless only modest in psychometric terms; and it was lower, around .4 to .5, than the reliability found for the alcohol-related bias. However, a further study (Gladwin & Vink, 2020) confirmed that when the task and procedure were optimized for reliability, a similar split-half reliability as for the alcohol bias, around .7, could be achieved. Reliability increased to .89 when probe location probability was manipulated to alternate between blocks with a task-induced bias towards versus away from threat (Gladwin, Halls & Vink, 2021).
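The split-half reliability estimates mentioned above can be computed along these lines. This is a generic psychometric sketch (correlating per-participant scores from two trial halves and applying the standard Spearman-Brown correction), not code from these studies:

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two lists of per-participant scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(scores_half1, scores_half2):
    """Split-half reliability of bias scores: correlate each
    participant's score from one half of the trials (e.g., odd trials)
    with their score from the other half, then apply the
    Spearman-Brown correction to estimate full-length reliability."""
    r = pearson(scores_half1, scores_half2)
    return 2 * r / (1 + r)
```

Note that the reversed-cue procedure described above goes beyond a simple split-half: it checks that the reliable variance tracks the predicted stimulus categories rather than incidental cue features.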
The interpretation of the bias as anticipatory was based on the outcome-based cognitive response selection model (defining "R3-reflectivity", see below) that originally motivated the anticipatory attentional bias work. This was supported in a predictive ABM training study (Gladwin, Möbius & Becker, 2019), in which training towards or away from predicted threat generalized to post-training stimulus-evoked bias; i.e., it's not just about the visual features of the cues becoming salient themselves via conditioning. This approach may also address a potential issue with usual Attention Bias Modification paradigms, namely that even when training attention away from certain stimulus categories, those categories are still task-relevant and therefore being made or kept salient, termed the salience side-effect (Gladwin, 2017).
There are lots of open questions remaining. For instance, the interpretation of effects so far has been relatively equivocal on specific underlying processes, taking an overall "attention as selection for action" perspective in which an "attentional" effect just needs to be caused by the (predicted) spatial positions of salient visual stimuli. Further research is needed to determine which specific mechanisms underlie the bias (although this would seem to be true for pretty much every relatively complex cognitive-emotional task once you start digging), possibly using psychophysiological or neuroimaging methods; this might well reveal a trade-off between a task with good psychometric properties and a task that attempts to break apart naturally coordinated processes. But the results so far do add to the evidence that the well-known psychometric problems with measuring attentional bias using traditional task variants should not lead to premature dismissal of such behavioural measures in general: reliability of the bias can be sufficiently high in principle. There are new tasks out there with various different features (predictive cues being just one example) that promise stronger empirical foundations.
Attentional carryover: Effects of trial-to-trial history on attentional bias variability

One potential cause of within-subject Attentional Bias Variability - whether this is considered noise or an informative measure in itself - concerns trial-to-trial carryover effects (Gladwin & Figner, 2019). Carryover refers to the dependence of the attentional bias on trial N on the probe location on trial N - 1. This was found to be the case using a so-termed "diagonalized" Visual Probe Task, which was specifically optimized for studying trial-to-trial fluctuations. Effects were found for different colours and for threat-versus-neutral stimuli. Responding to a probe stimulus at the location of a given colour induced an attentional bias towards that colour on the next trial. Carryover for threat versus neutral stimuli was asymmetrical: a bias towards the threat versus neutral cue was only found following trials on which the participant had responded to a probe at the location of the threat versus neutral cue. This pattern of previous-probe-dependence of the threat-related bias was also found for the anticipatory attentional bias (Gladwin, Figner & Vink, 2019). The effect was subsequently replicated with different facial stimuli and task variants (Gladwin, Jewiss & Vink, 2020). This study also tested whether conditioning on the previous trial's target location could yield more reliable scores even with exemplar-based (rather than predictive cue-based) biases, and replicable relationships with various mental health measures, but this was not the case.
Some forms of carryover effects may be related to trauma symptoms (Gladwin, 2017), although this study used a traditional dot-probe task that did not show a dependence of the bias on the previous probe location as found using the diagonalized VPT.
Interestingly, work by others also indicates that the standard dot-probe task doesn't capture the effect (Maxwell, Fang & Carlson, 2022). The big difference, I suspect, is that the diagonalized task has strong stimulus-response compatibility - you see the probe at a given location, you respond with a key mapped to that location. In the traditional dot-probe task, in contrast, the response is unrelated to probe location, breaking that link.
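The carryover analysis amounts to conditioning the bias score on the previous trial's probe location, roughly along these lines. This is a hypothetical sketch of the logic, not the published analysis code:

```python
from collections import defaultdict
import statistics

def carryover_bias(trials):
    """Split the attentional bias score by the previous trial's probe
    location, to quantify trial-to-trial carryover.

    Each trial is a dict with 'rt' (reaction time in ms) and
    'probe_at' ('threat' or 'neutral', the cue location at which the
    probe appeared). Returns a dict mapping the previous trial's probe
    location to a bias score (mean RT at neutral-cue locations minus
    mean RT at threat-cue locations; positive = bias towards threat).
    """
    rts = defaultdict(lambda: defaultdict(list))
    for prev, cur in zip(trials, trials[1:]):
        rts[prev["probe_at"]][cur["probe_at"]].append(cur["rt"])
    return {
        prev: statistics.mean(by_loc["neutral"]) - statistics.mean(by_loc["threat"])
        for prev, by_loc in rts.items()
        if by_loc["threat"] and by_loc["neutral"]
    }
```

The asymmetry described above would show up here as a positive bias only in the entry for previous-probe-at-threat trials.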
Resolution-"resilient", threshold-free, topography-based cluster analysis (initially made for fMRI)

Landscape-based cluster analysis (Gladwin, Vink & Mars, 2016) can be used to define clusters (e.g., in fMRI data) topographically, rather than based on a cut-off of a statistical threshold. Recognizing a "blob" intuitively involves looking at its shape, which this method formalizes as the second derivative of activation over space. A recursive clustering function defines 3D blobs of arbitrary shapes, in which "threads" of spreading activation search for edges, i.e., inflection points, in a search pattern flowing outwards from a local maximum. Each blob defined this way is assigned an activation score combining blob size and the statistical significance of effects in the voxels it contains.
A null hypothesis distribution of the maximum activation score over all blobs found over the whole brain can then be generated using randomization / permutation of the underlying individual b-maps, generating t-maps sampled from the null hypothesis distribution. Using a maximal activation score found in fewer than 5% of these permutation-based t-maps thus provides control of the whole-brain familywise error rate, taking account of dependence between voxels. One aim of this method is to avoid an arbitrary threshold for the initial definition of clusters. Further, the permutation testing of activation scores scales up happily with better spatial resolution (i.e., more and smaller voxels), unlike traditional familywise correction, in which statistical power would suffer from a larger number of voxels.
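The permutation logic can be sketched as follows. This is a deliberately simplified 1D stand-in: the toy `max_cluster_score` replaces the actual topographical blob definition, and all names and thresholds are illustrative assumptions, not the published method:

```python
import random
import statistics

def t_map(bmaps):
    """One-sample t-value per voxel across participants' b-maps
    (each b-map is a list of per-voxel effect estimates)."""
    n = len(bmaps)
    out = []
    for voxel in zip(*bmaps):
        m = statistics.mean(voxel)
        sd = statistics.stdev(voxel)
        out.append(m / (sd / n ** 0.5))
    return out

def max_cluster_score(tvals, threshold=0.0):
    """Toy blob score for a 1D map: the largest running sum of
    contiguous suprathreshold t-values. The real landscape-based
    method instead defines blobs topographically in 3D via inflection
    points; this stand-in only illustrates the permutation step."""
    best = cur = 0.0
    for t in tvals:
        cur = cur + t if t > threshold else 0.0
        best = max(best, cur)
    return best

def permutation_fwe_cutoff(bmaps, n_perm=1000, alpha=0.05, seed=0):
    """Null distribution of the maximum blob score over the whole map,
    generated by randomly sign-flipping each participant's b-map
    (valid under the null of no effect). The (1 - alpha) quantile of
    the maxima is the familywise-error-corrected cutoff."""
    rng = random.Random(seed)
    null_max = []
    for _ in range(n_perm):
        flipped = [[v * rng.choice((-1, 1)) for v in b] for b in bmaps]
        null_max.append(max_cluster_score(t_map(flipped)))
    null_max.sort()
    return null_max[int((1 - alpha) * n_perm)]
```

Because the null distribution is built from the maximum over the whole map, any observed blob exceeding the cutoff is significant at the familywise level, with voxel dependence handled automatically.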
Threat anticipation and freezing

Freezing (operationalized as body sway reduction and bradycardia in a threatening context) may be a preparatory, rather than "helpless", state (Gladwin, Hashemi, van Ast & Roelofs, 2016). We used a Virtual Shooting Task to manipulate the ability to prepare defensive responses to threat by making participants either armed or unarmed. Freezing was very strongly related to being armed, and, within the armed condition, additionally to the degree of threat. The scientific concept of freezing thus has to be separated from the idea of "being frozen in fear." The task has been used to study neural effects related to freezing and the freeze-fight transition (Hashemi et al., 2019). Interactions between threat effects and sleep deprivation were studied using a stop-signal version of the task (van Peer, Gladwin & Nieuwenhuys, 2019). This showed that threat affected impulsivity while sleep deprivation caused a general reduction of accuracy.
Freezing effects were further explored behaviourally in a subsequent study on distraction and freeze-terminating stimuli (Gladwin & Vink, 2018). We looked at the effect of only-anticipated versus actual virtual attacks as distractors in an emotional Sternberg task. While a task-irrelevant attack was impending, reaction times on the primary working memory task were slowed; but this appeared to be due to a reversible inhibited state that was released after the attack actually occurred.
Anticipatory slowing of responses on a threat-unrelated task was confirmed in a subsequent series of studies (Gladwin & Vink, 2020). In these studies visually neutral predictive cues were used as distractors, instead of the previously used face versus no-face cues. This was done to focus on improving the comparability between impending (but not actual) attack versus control (no risk of attack) trials. (Comparisons between anticipated and actual attack could better be made using the task design in the previous study.) These studies further explored temporal dynamics of the response by varying the time between the cue and the probe stimulus, showing that the slowing effect arose after around 600 ms and had decreased by 1200 ms. Note that this contrasts with the threat-induced impulsivity found in, e.g., Gladwin, Möbius & Vink (2019). This minor paradox was interpreted in terms of selective lowering of the response threshold to threat-relevant stimuli.
The figure below provides an example of the reaction time curves from the anticipatory study. Curves are plotted for the conditions Safe, Threat and Attack. The horizontal axis shows the Cue-Stimulus Interval [ms], i.e., the time between the presentation of the cue determining the threat of an attack and, for Safe and Threat trials, the appearance of the probe for the working memory task; on Attack trials, the attack stimulus occurred first. The primary interest in this study was in the difference between Safe and Threat trials. The figure shows the increase in reaction times due to predicted threat arising around half a second after cue presentation. This followed an initial slowing for both Threat and Safe trials that seems likely to reflect an orienting process.
The R3 "Reflective Cycle" model and computational models of automatic associations

The R3 model is the Reprocessing/Reentrance and Reinforcement model of Reflectivity, or "Reflective Cycle" model. It's an attempt at deconstructing and redefining dual-process models: see section 5 of Gladwin, Figner, Crone & Wiers (2011) and this chapter (Gladwin & Figner, 2014) for an argument why we should talk about emergent states of impulsive versus reflective processing, defined parametrically in terms of response selection search time, rather than processes with particular features or separable systems. The model generated the anticipatory attentional bias line of research and, more generally, serves to justify, while evolving, the continued use of traditional dual-process concepts (e.g., in prospective memory; Gladwin, Jewiss, Banic & Pereira, 2020). The "R3" label was used in tribute to the well-known 3D space of real numbers, since it aims to contain theories spanned by a limited set of relevant concepts.

The theoretical aim of the model is not to coin new terms or introduce "new" concepts. Rather, it simply proposes a theoretical space based on existing concepts grounded in cognitive neuroscience, within which we may be able to better conceptualize what we now model as automatic versus controlled processes. The activation of neural information processing underlying behaviour and cognition is generally time-dependent, so simply delaying the final selection of a response may change the preferred response and the available information. The principle illustrated by this model is that impulsive versus reflective behaviour can be generated by a continuous underlying parameter - how long will you delay? - rather than different types of processes. However, faster processes (e.g., more strongly reinforced associations, or simpler computations) will naturally dominate response selection more after shorter than after longer delays.
Further, certain situations will naturally teach individuals to respond in ways that optimize speed versus more elaborate processing - more "emotional" situations will thus tend to be linked to more impulsive processing. This close relationship between neurocognitive processes underlying reflectivity and learning/adaptation to the environment has been argued to play a core role in understanding how a vast array of biological and environmental factors interact to determine the development of self-regulation (Vink et al., 2020).
Some of the concepts of the model are illustrated in simulations (Gladwin & Figner, 2022, preprint). The simulations hopefully make some points of the model more concrete: (1) how merely changing a parameter that controls response selection time, in combination with temporal dynamics of response value, can lead to more reflective versus more automatic response selections, and (2) how the learning environment can set this reflectivity parameter via reinforcement, by rewarding or punishing fast responses versus taking the time to reflect. The figure below shows an example of a trial with a typical random walk of activation but a dynamic response threshold. If the decay of the threshold had been slow enough, i.e., "more reflective", the early response (represented by hitting the lower curve) would have been avoided and replaced by the response represented by the upper curve.
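A minimal sketch of this kind of simulation, in the spirit of (but not reproducing) the R3 simulations: evidence follows a random walk while the response threshold decays over time, so the decay rate acts as the reflectivity parameter. All parameter values and names here are illustrative assumptions:

```python
import random

def simulate_trial(drift=0.02, noise=0.3, start_threshold=3.0,
                   decay=0.005, max_steps=2000, seed=None):
    """Simulate one trial: a random walk of activation between two
    response boundaries whose distance from zero shrinks over time.

    A slower decay ("more reflective") keeps the boundaries wide for
    longer, leaving more time for later-arriving information to change
    which boundary is hit first; a faster decay makes early, more
    strongly activated responses win. Returns (response, step), where
    response is 'upper', 'lower', or 'none' if no boundary was hit.
    """
    rng = random.Random(seed)
    x = 0.0
    for step in range(1, max_steps + 1):
        x += drift + rng.gauss(0.0, noise)
        # Collapsing threshold, floored so it never reaches zero.
        threshold = max(start_threshold - decay * step, 0.5)
        if x >= threshold:
            return "upper", step
        if x <= -threshold:
            return "lower", step
    return "none", max_steps
```

Running many such trials while varying only `decay` would illustrate the model's central point: the same underlying processes produce more impulsive or more reflective choices depending on a single continuous parameter, which the learning environment could itself set via reinforcement.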
While the R3 model aims to help conceptualize the relationship between reflective and automatic processing, existing computational semantic models may be translated into the "automatic" side of the equation. Neural network language models learn from actual language usage, in large text corpora, and represent meaning in a hidden layer of, e.g., 300 neurons. These representations have been shown to allow vector algebra to be applied to word meaning; in particular, the cosine distance between representations provides a geometric measure of semantic similarity. A possible way to flesh out "automatic associations" between concepts is therefore to consider a stronger association as a closer semantic similarity. Such similarities were found between the representations of words related to nature and to mental health, in line with a broadly-defined biophilia hypothesis (Gladwin, Markwell & Panno, 2022). Further, in the context of alcohol-valence associations, semantic similarities defined at item level were found to be associated with congruence effects on an Implicit Association Test (IAT) variant (Gladwin, 2022).
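The cosine-similarity measure referred to here is straightforward to compute from embedding vectors; a minimal sketch with made-up toy vectors (real models use hundreds of dimensions, and the words and values below are purely illustrative):

```python
def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors: the geometric
    measure of semantic similarity used in this line of work. Values
    near 1 indicate closely related meanings; near 0, unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

# Toy 4-dimensional "embeddings", invented for illustration only.
nature = [0.8, 0.1, 0.3, 0.2]
calm   = [0.7, 0.2, 0.4, 0.1]
engine = [0.1, 0.9, 0.0, 0.6]
print(cosine_similarity(nature, calm) > cosine_similarity(nature, engine))  # True
```

On this view, item-level association strength between two concepts is simply the cosine similarity of their learned representations, which is what was related to IAT congruence effects above.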
These relationships suggest, theoretically, that language models might reflect an aspect of psychological associations. This could potentially explain why experimental effects involving verbal stimuli are associated with patterns of behaviour, e.g., via appraisal processes. Methodologically, the results raise the hypothetical possibility that model-based scores could be used to select stimuli for various aims - stronger similarities for large effect sizes, but weaker ones for reliability.